Research published in the journal F1000Research in 2023 suggests that certain personality traits, particularly honesty-humility and agreeableness, can predict how confident young people are in their ability to spot deepfake videos. The findings provide evidence that our underlying psychological makeup shapes our perceived vulnerability to deception in the age of advanced artificial intelligence.
Deepfake technology uses artificial intelligence to create highly realistic manipulated videos and audio recordings of real people. These programs study thousands of images and audio clips to produce synthetic media that depicts people saying and doing things that never actually happened. These digital forgeries are becoming harder to distinguish from reality, posing a growing threat to personal privacy and accurate information.
Scientists wanted to understand why some people feel more able to recognize these digital forgeries than others. A person’s belief in their ability to succeed in a particular situation is known in psychology as self-efficacy. Past research has shown that self-efficacy is often strongly influenced by basic personality traits.
By investigating these underlying psychological characteristics, the researchers aimed to uncover how different personality profiles influence a person’s confidence in identifying deceptive media. Understanding this relationship can help scientists develop better strategies to improve digital literacy and media resilience.
“As a social psychologist, I am fascinated by the intersection of human integrity and the evolution of technology, and I am interested in how information technology is not a neutral tool but can be used to enhance power by manipulating reality. Deepfakes represent a new frontier of perceptual enclosure, where our very ability to witness truth is being challenged. I wanted to investigate whether our innate personality traits provide a kind of natural defense or, on the contrary, vulnerability to the advanced systems that now dictate our digital environment,” explained study author Juneman Abraham, professor and vice president of research and technology transfer at BINUS University.
For the study, the scientists focused on the HEXACO model of human personality. This framework categorizes human personality into six broad dimensions: honesty-humility, emotionality, extraversion, agreeableness, conscientiousness, and openness to experience.
Researchers had 200 young people from Indonesia participate in an online survey. The sample included 139 women and 61 men, all between the ages of 18 and 25, with an average age of just over 22 years. This particular age group was chosen because young people are very active online and frequently encounter digital media.
Participants completed a standardized 60-item questionnaire measuring the six HEXACO personality traits. They also completed a custom questionnaire designed to assess their self-efficacy in recognizing manipulated media.
This custom measure asked participants to rate their confidence in noticing unnatural elements in photos and videos. For example, participants rated how well they felt they could spot abnormal eye movements, mismatched skin tones, and awkward facial expressions that didn’t match the emotion being spoken.
Statistical analysis revealed that only two of the six personality traits significantly predicted a person’s confidence in detecting deepfakes. Specifically, honesty-humility and agreeableness showed significant but opposite relationships with deepfake detection self-efficacy.
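A relationship like the one described above is typically estimated with a multiple regression of the self-efficacy score on the six trait scores. Below is a minimal sketch of such an analysis in Python; the data are simulated to merely mimic the reported pattern and are not the study’s actual dataset.

```python
import numpy as np

# Hypothetical data: 200 participants, mean scores on the six HEXACO traits
# (honesty-humility, emotionality, extraversion, agreeableness,
# conscientiousness, openness) plus a deepfake-detection self-efficacy score.
rng = np.random.default_rng(0)
n = 200
traits = rng.normal(3.0, 0.5, size=(n, 6))  # Likert-style trait means

# Simulate the reported pattern: honesty-humility (column 0) relates
# negatively to self-efficacy, agreeableness (column 3) positively;
# the other four traits contribute nothing beyond noise.
self_efficacy = (3.0 - 0.4 * traits[:, 0] + 0.5 * traits[:, 3]
                 + rng.normal(0.0, 0.3, n))

# Ordinary least squares: self_efficacy ~ intercept + six traits
X = np.column_stack([np.ones(n), traits])
coefs, *_ = np.linalg.lstsq(X, self_efficacy, rcond=None)

labels = ["intercept", "honesty-humility", "emotionality", "extraversion",
          "agreeableness", "conscientiousness", "openness"]
for name, b in zip(labels, coefs):
    print(f"{name:18s} {b:+.2f}")
```

With data simulated this way, the fitted coefficients recover a negative weight for honesty-humility and a positive weight for agreeableness, while the remaining traits hover near zero, mirroring the study’s reported pattern of significance.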
Those who scored higher on honesty-humility tended to report lower confidence in their ability to spot deepfakes. This trait encompasses a reluctance to manipulate others and a general lack of interest in breaking rules or accumulating wealth.
The researchers suggest that people high in honesty-humility may be less attuned to manipulative techniques in general. As a result, they may feel overwhelmed by the highly deceptive nature of deepfakes and doubt their own ability to identify them.
In contrast, people with high agreeableness scores reported higher confidence in their ability to detect manipulated media. Agreeableness reflects a tendency to be cooperative, trusting, and willing to compromise with others.
The scientists suggest that highly agreeable people may place more trust in collective intelligence and shared forensic tools. This collaborative mindset may increase confidence in leveraging community resources and the wisdom of the crowd to move safely through digital spaces.
“The most important lesson is that individual trust is an unreliable shield against systemic deception,” Abraham told PsyPost. “In our study, we found that cooperativeness correlated with high self-efficacy, suggesting that people’s willingness to cooperate and trust – traits essential to social cohesion – is being directly tested by AI.”
“However, the negative correlation for honesty-humility alerts us that the most grounded and prudent people may actually feel the most vulnerable. The average person, especially in non-Western contexts where communal trust is an important social currency, needs to recognize that digital literacy is not just a technical skill, but a form of social resilience.”
The other four personality traits did not significantly predict self-efficacy. Emotionality, extraversion, conscientiousness, and openness to experience had no clear effect on young people’s self-confidence.
“We found that traits traditionally associated with ‘personal success’ in a market-driven society, such as conscientiousness and openness, were not significantly predictive of people’s confidence in recognizing deepfakes,” Abraham said. “This suggests that individual ‘virtue’ is insufficient in the face of the scale of algorithmic deception. It highlights that the problem is not a lack of individual ‘effort’ or ‘intelligence’, but rather systemic asymmetries between those who create these technologies and those who are subjected to them.”
Statistical analysis also revealed no significant differences in self-efficacy between men and women. Men and women reported similar levels of confidence in their ability to recognize manipulated digital media.
Although these findings provide insights into digital psychology, there are some limitations that should be kept in mind. The most important limitation is that this study measured subjective confidence rather than actual accuracy in detecting deepfakes.
People often overestimate their own skills, a psychological phenomenon known as the Dunning-Kruger effect. Someone may be highly confident in their detection ability yet perform poorly when tested on real deepfake videos.
“There is a great danger in the false sense of security that technology and individuality provide,” Abraham said. “Furthermore, our study was conducted among young people in Indonesia, which is important because non-Western societies often have different psychological responses to power and collective information compared to Western societies, which most AI research focuses on. By democratizing this research, we can help societies in the Global South build their own digital defense systems that are culturally relevant and resistant to external manipulation.”
Future research should test participants with actual deepfake media and compare their perceived confidence to their real-world accuracy. The scientists also recommend randomized sampling and study designs that can establish whether these personality traits directly cause changes in attitudes toward digital media.
“My long-term goal is to solidify the digital psychoethics framework as a necessary response to the challenges of our time,” Abraham explained. “If you look at the trajectory of my publications on Google Scholar, there is a very consistent thread: an effort to understand human integrity within the context of structural pressures. My academic journey from intensive research on the psychology of corruption and academic integrity to the development of psychoinformatics is a logical evolution that addresses how human integrity is tested when AI begins to hijack real-life narratives.”
“I see deepfakes not just as a fad, but as a new form of reality tunnel that threatens to erode human agency, especially in non-Western contexts and the Global South. Therefore, the commitment to open science that I have consistently advocated in various forums and writings serves as a form of resistance against the commodification of truth. My aim is to ensure that psychological knowledge about AI mitigation does not become a tool for private monopolies or technological elites, but rather remains a public good that allows ordinary people to build collective resilience against the systematic manipulation of reality.”
“We must stop seeing AI deception as simply a technical flaw and start seeing it as a psychological challenge to human sovereignty in an increasingly automated world,” Abraham added. “At a time when our shared reality is being fragmented and sold to the highest bidder, maintaining public trust requires understanding the human vulnerabilities that these systems are designed to exploit.”
The study, “Predicting self-efficacy in deepfake recognition based on personality traits,” was authored by Juneman Abraham, Her Alamsha Putra, Tommy Prayoga, Haako Leslie Hendrik Spitz Warners, Rudi Hartono Manulun, and Togiaratua Nainggolan.

