Artificial intelligence could be used to generate deceptive videos that damage politicians’ reputations, even if viewers suspect the footage is fake. A new study published in Communication Research found that these manipulated clips reduced support for the targeted candidates. Standard fact-checking efforts reportedly cannot fully undo the reputational damage.
Disinformation created using artificial intelligence is often seen as a major threat to global elections. Technology has made it possible for malicious attackers to seamlessly replace a person’s face or clone their voice. These creations are commonly referred to as deepfakes. Political operatives can use these tools to make it appear that opposing candidates are saying something outrageous or offensive.
Michael Hameleers, a communication researcher at the University of Amsterdam, led a team that investigated how these videos affect the public. Hameleers and colleagues Toni G.L.A. van der Meer, Marina Tulin, and Tom Dobber wanted to track voter responses over time. They aimed to discover whether these manipulated videos actually influence people’s minds during election cycles.
Visual information is known to have a significant impact on human perception. Video evidence often bypasses the usual skepticism because people are used to believing their own eyes. The research team pitted this persuasive power of video against the brain’s tendency to notice inconsistencies. They wanted to know whether statements completely out of character for a politician would override the visual evidence of a realistic video.
Processing fluency is a psychological concept describing the ease with which information is understood. When media is easy to consume, people tend to accept it without critical thinking. The researchers suspected that the realistic video format might encourage this mental shortcut, making falsehoods easier to swallow. They wanted to measure whether a smooth presentation could mask obvious lies.
The team conducted tests across two contrasting political landscapes. The United States is characterized by a highly polarized two-party system and has historically been vulnerable to right-wing disinformation. The Netherlands operates under a multi-party system, with generally high trust in the press, providing a more resilient media environment.
Researchers recruited more than 3,000 adults in both countries. They planned a three-part experiment that would take place over a full week in 2021. Participants initially answered questions, were contacted again two days later, and completed the final survey three days later.
During the study, participants were randomly assigned to watch either a real political speech or a manipulated video. In the US, a doctored video featured Congresswoman Nancy Pelosi. The synthetic voice sounded as if she was sympathizing with the mob that invaded the U.S. Capitol and suggested that Americans needed to fight to take back their homeland.
In the Netherlands, the team selected a moderate Christian Democrat politician named Sybrand Buma. The doctored footage showed him delivering a radical anti-immigrant monologue about defending Dutch traditions from foreign influence. The messages were designed to completely contradict the established public personas of the two targets.
The project also tested potential defenses against digital fraud. Some participants read a media literacy warning before watching the footage. This introductory alert encouraged skepticism toward news sources and provided specific tips on how to spot fabricated news items online.
Another group saw a fact-check immediately after watching the video, in which the false claims were clearly corrected. The correction message provided a point-by-point rebuttal of the statements made in the video. These interventions closely mimicked the format used by professional journalism organizations.
The researchers evaluated the results and found that the audience was largely able to see through the deception. Participants in both countries rated the altered videos as much less believable than the authentic footage. Given the strangeness of the statements, viewers likely sensed that something was wrong with the videos.
Despite the structural differences between the two countries, psychological trends were surprisingly consistent. Voters in the polarized American system and the consensus-driven Dutch system had roughly the same reaction to the composite video. The broad similarities suggest that vulnerability to artificial media transcends cultural boundaries.
Even though people correctly suspected the videos were fake, ratings for the politicians still declined. The deepfakes succeeded in damaging the reputations of both Pelosi and Buma. This finding highlights the psychological disconnect between assessing a video’s credibility and absorbing its emotional weight.
In fact, the people who suffered the most serious reputational damage were those who supported the politicians who were initially targeted. Seeing a well-liked leader express clearly extreme or contradictory opinions provoked an immediate negative reaction. People who already disliked the politician did not change their assessment much, mainly because their opinion was already completely negative.
Deepfakes changed people’s impressions of specific politicians, but not their overall political beliefs. U.S. participants did not suddenly support the Capitol riot after watching Pelosi’s video. This deception changed judgments about the individual messenger, not the message itself.
The researchers expected repeated exposure to trigger the illusory truth effect, in which repeated falsehoods eventually feel familiar and accurate. In this experiment, watching the video twice did increase the reputational damage for the American participants. But repetition did not make the outlandish claims any more believable.
The effects of viewing fabricated media were mostly temporary in both groups. By the end of the week, the negative sentiment directed at the politicians had largely dissipated. This result suggests that a natural recovery occurs once exposure to the misinformation ends, at least within an isolated experiment.
Defensive interventions had mixed results for the audiences tested. Fact-checking the video made participants even less likely to believe the footage was real. But those very same fact-checks could not fully repair the psychological damage done to politicians’ reputations. Media literacy warnings had almost no measurable impact.
The study authors noted several limitations regarding video selection. The selected clips contained statements that departed radically from the politicians’ typical rhetoric, making the deception easier to spot. Future projects could test subtler manipulations to see whether highly plausible fakes evade human suspicion entirely.
The fabricated videos also contained minor visual defects. Voice actors were used to simulate the politicians, and attentive viewers may have detected slightly unnatural speech. As generation tools continue to evolve, these sensory giveaways will likely disappear.
The researchers recommend future testing during live political campaigns. Tracking real-world responses to actual digital propaganda would reveal how voters process such media alongside competing news coverage. Such experiments could pin down more precisely how artificial intelligence shapes modern democracies.
The study, “Radical Right Political Deepfakes Successfully Delegitimize Targeted Political Activists: Evidence from a Three-Wave Experiment in the United States and the Netherlands,” was authored by Michael Hameleers, Toni G.L.A. van der Meer, Marina Tulin, and Tom Dobber.

