Recent research published in Proceedings of the National Academy of Sciences suggests that our pre-existing beliefs deeply influence how we learn new information in everyday life. By tracking eye movements and decision-making during a simulated news-rating game, scientists found that people learn easily from rewards consistent with their existing views, but struggle to adapt when rewards go against their preconceptions.
These findings provide evidence of cognitive pathways that allow misinformation to persist in modern digital environments. This dynamic explains why simply presenting factual corrections is not enough to change minds.
People are increasingly relying on social media platforms for their daily news consumption, where automated algorithms tend to filter content according to users’ existing preferences. This digital environment provides a fertile ground for misinformation to spread rapidly among large numbers of people, raising questions about why individuals continue to believe falsehoods when objective fact-checking is readily available.
“I started seriously considering research in this area in 2021, after seeing firsthand the damage caused by misinformation during the COVID-19 pandemic, especially in relation to vaccination campaigns,” said study author Stefano Lasaponara, associate professor at the Department of Psychology at Sapienza University of Rome. “That experience led me to think about the extent to which fake news can influence not only what people believe, but also how they learn from their feedback and experiences.”
Lasaponara and his colleagues sought to understand how a person’s pre-existing judgments and internal confidence interact with the way they learn from external feedback. They designed the study to test whether our tendency to prefer belief-consistent information may be rooted in basic, everyday learning mechanisms. By investigating these learning processes, the authors hoped to uncover why it is so difficult for people to update their opinions when faced with misleading news articles.
To investigate these questions, the scientists recruited a final sample of 28 healthy young adults between the ages of 18 and 36 to participate in a detailed three-part experiment. In the first phase, participants viewed a set of 324 news headlines that had recently circulated on popular social media platforms. Half of the selected headlines contained real news, and the other half contained completely false information. Participants read each headline on a computer screen and decided whether it was true or false.
They also bet hypothetical amounts ranging from 0 to 99 cents on the answers they provided. This financial stake served as a measurable indicator of their internal confidence in each news item. Based on these responses, the scientists grouped the headlines into four separate categories for each participant: news judged to be high-confidence true, low-confidence true, high-confidence fake, and low-confidence fake.
At this stage, the researchers used special eye-tracking glasses to measure the dilation of the participants’ pupils while they read. Pupil dilation is an involuntary physical response that indicates mental effort, focused attention, and physiological arousal. By measuring these subtle responses, the team could track mental engagement in real time without interrupting the task.
In the second phase, the researchers tested how well participants could learn new rules based on their earlier judgments. Participants played a computer game in which they chose between pairs of headlines they had evaluated in the first phase. The goal was to select the headline that would earn a virtual monetary reward of 20 cents. Unknown to the participants, rewards were not assigned randomly throughout the game.
In different rounds of the game, an 83% chance of winning a reward was tied to a specific category established during the initial evaluation. For example, in one round, participants were rewarded for choosing the headline they had previously judged to be true. In another round, they were rewarded for choosing the headline they had judged to be fake. In other rounds, choices were rewarded based on high or low confidence, and in one round, rewards were assigned completely at random as a baseline comparison.
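The reward rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' task code; the assumption that non-target choices pay out with the complementary 17% probability is mine, borrowed from standard probabilistic-learning designs.

```python
import random

# Four participant-specific headline categories from phase one.
CATEGORIES = ["high-conf-true", "low-conf-true", "high-conf-fake", "low-conf-fake"]

def reward(chosen_category: str, target_category: str, p_win: float = 0.83) -> int:
    """Payoff in cents for one choice under the hidden round rule.

    Choosing a headline from the round's target category pays 20 cents
    with probability p_win; other choices pay with probability 1 - p_win
    (an assumption, typical of probabilistic reward tasks).
    """
    p = p_win if chosen_category == target_category else 1 - p_win
    return 20 if random.random() < p else 0
```

Over many trials, headlines from the target category pay out far more often, which is the statistical signal participants had to pick up.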
The third and final phase tested whether the learning game changed participants’ thoughts about the news items. The scientists re-showed participants each original headline, their initial true/false judgment, and the associated confidence bet. Participants could either confirm their initial judgment or change their minds completely. If the final answer matched the news item’s actual real or fake status, they kept the money they had wagered as their final payout.
The results of the learning phase showed that participants learned very differently depending on the hidden rules of the computer game. When the game rewarded participants for choosing headlines they already believed to be true, they quickly learned winning strategies and earned high scores. On the other hand, when the game rewarded people for choosing headlines they had judged to be fake, their performance declined. Participants also had a harder time discovering the game’s hidden rules when rewards were tied to confidence levels rather than to beliefs about truth.
“One of the key lessons is that our previous beliefs can begin to shape our decisions even before we make them explicitly,” Lasaponara said. “In our study, these pre-existing beliefs were strong enough to influence learning itself. More broadly, this suggests that we should approach new information as critically and openly as possible, and try to evaluate it as soon as possible without filtering it through preconceptions.”
To understand the underlying mental strategies, the scientists used computational modeling to create mathematical simulations of the human decision-making process. The model revealed that when rewards matched participants’ truth beliefs, they used broad, generalized rules to make choices.
When the rewards no longer matched their sense of truth, participants abandoned these broad generalization strategies. Instead, they reverted to simply reacting to positive and negative feedback on a trial-by-trial basis, which turned out to be a far less effective way to navigate the game.
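The difference between these two strategies can be made concrete with a toy reinforcement-learning simulation. This is a hypothetical sketch, not the authors' published model: one learner generalizes a single value estimate across a whole headline category, while the other updates each headline separately from trial-by-trial feedback.

```python
import random

def run(generalize: bool, n_trials: int = 400, alpha: float = 0.3, seed: int = 1) -> float:
    """Fraction of choices landing on the rewarded category (illustrative)."""
    rng = random.Random(seed)
    n_items = 40                                # headlines in the game
    category = [i % 2 for i in range(n_items)]  # 0 = rewarded category, 1 = other
    q_item = [0.0] * n_items                    # per-headline value estimates
    q_cat = [0.0, 0.0]                          # category-level value estimates
    correct = 0
    for _ in range(n_trials):
        a, b = rng.sample(range(n_items), 2)    # a pair of headlines
        if generalize:                          # choose by category value
            va, vb = q_cat[category[a]], q_cat[category[b]]
        else:                                   # choose by individual item value
            va, vb = q_item[a], q_item[b]
        choice = a if va >= vb else b
        # Category 0 pays with 83% probability, category 1 with 17%.
        r = 1.0 if rng.random() < (0.83 if category[choice] == 0 else 0.17) else 0.0
        # Simple delta-rule updates for both representations.
        q_item[choice] += alpha * (r - q_item[choice])
        q_cat[category[choice]] += alpha * (r - q_cat[category[choice]])
        correct += category[choice] == 0
    return correct / n_trials
```

The generalizing learner typically finds the rewarded category faster, because a single feedback signal informs every headline in that category, whereas the trial-by-trial learner must rediscover the rule one headline at a time.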
Eye-tracking data provided physical evidence that our beliefs influence our nervous systems before we make conscious choices. Initially, participants’ pupils dilated more widely when looking at headlines they would later judge with high confidence. This dilation suggests that strong subjective beliefs trigger an early physiological arousal response within the body. During the learning phase, pupils also dilated when participants faced mental conflict, such as having to choose between a strong belief and a competing reward signal.
“We expected to find pupillary effects related to the moment of decision itself, but we did not expect to observe pupillary effects at an early stage when belief-consistent choice tendencies are formed,” Lasaponara said. “This was particularly interesting because it suggests that the influence of prior beliefs may begin to become apparent before an overt response is made.”
When participants received feedback that contradicted their established beliefs, their pupils also dilated, indicating cognitive surprise and increased mental load. During the final feedback stage, participants showed a strong tendency to stick to their original opinion about the headline. They were unlikely to change their minds, especially if they placed high confidence bets at the beginning of the experiment.
Interestingly, when confidence was high, people were more reluctant to change their minds, regardless of whether the headline was actually true or false. Participants were slightly more willing to update their beliefs if they had initially expressed a lack of confidence in their decisions. Although this study provides detailed evidence of how subjective beliefs shape learning, there are caveats and limitations to keep in mind.
Because this study required participants to experience the different reward rules in succession, it is possible that the rules learned in one round influenced behavior in the next. “An important caveat is that this study still cannot make strong claims about correcting misinformation or when and how people really change their minds after learning,” Lasaponara explained. “Our results show that prior beliefs can bias reinforcement learning, but we don’t yet know how to reliably reverse that bias, which is what we’re currently working on in follow-up research.”
The experiment also relied solely on political and social news headlines, so these learning patterns may look different when the topic is neutral or unrelated to current events. Future studies may extend these physiological findings using different types of information to see whether this learning behavior applies to other areas of human life.
“Our broader goal is not only to better understand why people believe fake news, but also to identify the conditions that make misinformation less effective,” Lasaponara added. “Follow-up research is investigating whether different reinforcement structures lead to different degrees of belief updating and how computational models can help explain when people resist correction and when they become more flexible.”
In addition to changing the reward rules of the computer game, scientists could also design experiments that explicitly present direct evidence contradicting participants’ beliefs. This alternative approach could help map out the exact situations that might ultimately prompt people to update their most stubborn opinions.
“The title is also a little homage to Metallica, who I’m a big fan of,” Lasaponara added. “More importantly, this research would not have been possible without the contributions of my co-authors, especially Valentina Piga and Silvana Rosito. Their contributions were the backbone of the project.”
The study, “In the eye of the beholder: Pupil responses reflect how subjective prior beliefs shape reinforcement learning with fake news,” was authored by Silvana Rosito, Valentina Piga, Sara Lo Presti, Angelica Scuderi, Fabrizio Doricchi, Massimo Silvetti, and Stefano Lasaponara.

