True messages are more persuasive and more likely to be shared than false ones, according to a new study published in the Journal of Personality and Social Psychology. The results, drawn from four large-scale experiments, challenge the widespread belief that misinformation naturally spreads more effectively than accurate information.
Concerns about the impact of false information have grown in recent years, particularly as misleading claims have been linked to delays in climate action, public health problems and loss of trust in institutions. Previous research has shown that falsehoods can spread rapidly on platforms such as X (formerly Twitter), leading many to conclude that lies have an inherent advantage in the digital environment. But new research suggests this pattern may be shaped more by the design of social media platforms than by human preferences.
Researchers led by Nicholas Fay at the University of Western Australia sought to examine how people react to true and false information when the influence of algorithms, bots and platform incentives is removed.
The research team conducted four experiments with a total of 4,607 participants (ages 18 to 99). Two experiments focused on “persuasion games,” in which the goal was to create short messages to convince others of a point. Two other experiments focused on “attention games,” in which the goal was to write messages designed to capture as much attention as possible.
In the first and third experiments, human participants wrote the messages. They were randomly assigned to base their messages on information they believed to be true, on information they believed to be false, or on whatever they wished, with no constraint. In the second and fourth experiments, messages were generated by the artificial intelligence model GPT-3.5 under the same constraints. A separate, large group of human participants then rated all messages for authenticity, persuasiveness, emotional tone, and shareability.
The results were consistent across all four experiments. Messages written with the intent to be true were rated as more persuasive and more interesting, producing a stronger belief change in the direction of the claim. In contrast, false messages often made participants less likely to believe the claim. True messages were also more likely to be shared both online and offline.
But researchers found that the main reason people chose to share information was not the truth itself. Instead, sharing was primarily driven by the positive emotions the message evoked and the extent to which it facilitated social interaction.
The experiment also revealed that messages generated by GPT-3.5 were consistently rated as more persuasive and easier to share than messages written by humans, especially when the AI was instructed to generate truthful content.
Another notable finding was that when participants were free to write persuasive messages without constraints, they tended to be truthful by default. Their unconstrained messages were rated about as true as those written under explicit instructions to be accurate.
This tendency was slightly attenuated when participants were asked to write attention-getting messages, but their messages were still significantly more truthful than those written under deceptive instructions. Importantly, the researchers noted that softening the truth to make a message more attention-grabbing did not actually increase user engagement or sharing intent.
Fay and colleagues concluded: “Our findings suggest that people, both as producers and consumers of information, are more likely to know the truth. This is consistent with research showing that the majority of online misinformation is spread by a small number of super-sharers.”
The researchers acknowledged some limitations. The experiments were conducted in a controlled environment and may not reflect the complexity of real-world information ecosystems. Participants were primarily from Western backgrounds, and the roles of repetition, social networks, and source credibility were not explored.
The study, “Truth against falsehood: Experimental evidence about what persuades and spreads,” was authored by Nicholas Fay, Keith J. Ransom, Bradley Walker, Piers D. L. Howe, Andrew Perfors, and Yoshihisa Kashima.