Artificial intelligence programs can persuade people to moderate their political views, but highly customized messages and extended back-and-forth conversations with bots appear to work no better than a single, well-crafted argument. These results challenge long-held academic theories about what makes political messages effective and suggest that targeted data and interactive discussions may not deliver the benefits political strategists expect. The findings were recently published in the Proceedings of the National Academy of Sciences.
Changing the minds of voters is an essential feature of democratic societies. Advocacy groups, public health officials, and political candidates spend billions of dollars trying to sway public opinion on polarizing topics. Despite decades of research, it remains difficult to pinpoint the exact psychological processes that determine whether a person changes their mind. Academic researchers often face practical limitations when studying how social communication works in the real world.
Two central concepts have dominated academic understanding of targeted messaging. The first is message customization, known in politics as microtargeting. This theory proposes that messages are more effective when they are explicitly tailored to the recipient’s personal characteristics, values, and demographics. The central idea is that persuaders should adapt their message to the audience, rather than expecting the audience to adapt to the message.
The second concept is known as the elaboration likelihood model. This model suggests that when people expend significant cognitive effort on a message, they are more likely to experience lasting attitude change. In other words, if a person has to actively think about a topic in a conversation, ask questions, and defend their opinion, any resulting change of mind should be more durable than if they simply read a static flyer.
Historically, it has been surprisingly difficult to separate these two mechanisms in a laboratory setting. Human researchers and actors participating in experiments introduce uncontrolled variables into the interaction. Human confederates may vary their tone of voice, show subtle facial expressions, or exert social pressures that change how subjects form their opinions.
Lisa P. Argyle, a political scientist at Brigham Young University, led a team of researchers hoping to solve exactly this methodological problem. Argyle collaborated with Brigham Young University colleagues Ethan C. Busby, Joshua R. Gubler, Alex Lyman, Justin Olcott, Jackson Pond, and David Wingate. They theorized that generative artificial intelligence could act as a fully controlled debate partner for human subjects.
By using a large language model, the research team could generate text with a consistent tone and style across thousands of separate interactions. This allowed the researchers to separate the effects of customization and cognitive elaboration without the confounding influence of human social dynamics. They wanted to know whether highly customized messages and interactive chats would actually outperform a single, well-written general message.
To answer this question, the team designed two preregistered online survey experiments with approximately 3,700 adult participants in the United States. The researchers recruited respondents who closely matched census averages in age, gender, and race. They also ensured an even balance of political ideology, including equal numbers of Democrats and Republicans.
The first study focused on the contentious topic of immigration. Participants answered a series of questions, including their support for increased spending on border security and their views on visa sponsorship for immigrants. The second study focused on the curriculum used in public schools. Specifically, it asked participants how much control parents should have over instruction on controversial social topics and whether teachers should bring their personal political views into the classroom.
After establishing these baseline opinions, the researchers randomly assigned participants to either a control group or one of four experimental interventions. All experimental interventions used large language models to try to persuade participants to change their minds. The bot’s goal was always to argue against the participant’s stated position.
The first experimental group received a single general message. The software was instructed to act as an expert and write the strongest possible paragraph arguing for the opposing political view. This text was not tailored to any particular reader.
The second group received microtargeted messages. In this scenario, the artificial intelligence was supplied with all the demographic data participants provided at the beginning of the study. The bot used this background information to craft highly personalized arguments, testing the approach behind modern customized political campaigns.
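The paper’s actual prompts and model are not reproduced in this article, but the two message-only conditions can be illustrated with a minimal sketch. The version below assumes the OpenAI Python client; the templates, field names, and model choice are hypothetical placeholders, not the study’s materials.

```python
from openai import OpenAI  # assumption: OpenAI client; the study's actual model is not named here

client = OpenAI()

# Hypothetical prompt templates; the study's real instructions are paraphrased, not quoted.
GENERAL_PROMPT = (
    "You are an expert persuader. Write the strongest possible paragraph "
    "arguing {position} on the topic of {topic}."
)

TARGETED_PROMPT = (
    "You are an expert persuader. Your reader is {age} years old, identifies as "
    "{gender} and {ideology}. Write the strongest possible paragraph, tailored "
    "to this specific reader, arguing {position} on the topic of {topic}."
)

def generate_message(template: str, **fields) -> str:
    """Generate one persuasive paragraph under a fixed, fully controlled prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",   # placeholder model choice
        temperature=0.7,  # held constant so tone and style stay comparable across conditions
        messages=[{"role": "user", "content": template.format(**fields)}],
    )
    return response.choices[0].message.content

# General condition: identical text no matter who reads it.
general = generate_message(
    GENERAL_PROMPT,
    position="in favor of increased border security spending",
    topic="immigration",
)

# Microtargeted condition: survey demographics are injected into the prompt.
targeted = generate_message(
    TARGETED_PROMPT,
    age=42, gender="a woman", ideology="politically liberal",
    position="in favor of increased border security spending",
    topic="immigration",
)
```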
The third group engaged in a direct interactive discussion. Participants exchanged six conversational turns with the artificial intelligence program. The bot was instructed to act as a psychology expert, offering rebuttals and asking follow-up questions designed to force deep cognitive engagement from participants.
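Under the same assumptions, the six-turn interactive condition amounts to a fixed-length chat loop. In the sketch below, which reuses the client from the previous example, get_user_reply is a hypothetical stand-in for the survey interface that collects each participant response, and the prompt wording is again illustrative only.

```python
# Hypothetical system prompt for the debate condition.
DEBATE_PROMPT = (
    "You are an expert in the psychology of persuasion. Debate the user, arguing "
    "{stance} on {topic}. Rebut their points and end every reply with a "
    "follow-up question that pushes them to think through their position."
)

def run_chat(client, system_prompt: str, get_user_reply, turns: int = 6) -> list[dict]:
    """Run a fixed-length back-and-forth between a participant and the bot."""
    messages = [{"role": "system", "content": system_prompt}]
    for _ in range(turns):
        # The participant writes a reply; the bot rebuts and probes further.
        messages.append({"role": "user", "content": get_user_reply()})
        response = client.chat.completions.create(model="gpt-4o", messages=messages)
        messages.append({"role": "assistant", "content": response.choices[0].message.content})
    return messages

transcript = run_chat(
    client,
    DEBATE_PROMPT.format(stance="in favor of increased border security spending",
                         topic="immigration"),
    get_user_reply=input,  # simplest stand-in: read replies from the console
)
```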
The final experimental group took part in an interactive motivational interview. Motivational interviewing is a psychological technique often used in therapy to help people find their own internal motivation to change their behavior. Rather than debating participants directly, the bot asked reflective questions designed to lead respondents toward a new perspective on their own.
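This fourth condition could reuse the same chat loop with different instructions; the wording below is a hypothetical illustration, not the study’s actual prompt.

```python
# Swapping the system prompt turns the debate loop into a motivational interview.
MI_PROMPT = (
    "You are a counselor trained in motivational interviewing. Do not argue with "
    "the user about {topic}. Instead, ask open, reflective questions that invite "
    "them to voice their own reasons for seeing {topic} differently."
)

mi_transcript = run_chat(client, MI_PROMPT.format(topic="immigration"), get_user_reply=input)
```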
To verify the integrity of the experiment, the researchers performed a secondary analysis of the text the bots generated. They used machine learning techniques to map the core arguments in every message. This confirmed that the basic facts and arguments remained the same across all groups; only the style of presentation changed.
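The paper’s exact verification pipeline isn’t described in this article. One common way to check that the same core arguments recur across conditions is to embed each generated message and cluster the embeddings, as in the sketch below; the sentence-transformers model name and cluster count are arbitrary choices, not the study’s.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def map_core_arguments(messages: list[str], n_arguments: int = 8) -> list[int]:
    """Embed each bot-generated message and group the messages into recurring core arguments."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary small embedding model
    embeddings = encoder.encode(messages)
    labels = KMeans(n_clusters=n_arguments, n_init=10, random_state=0).fit_predict(embeddings)
    return list(labels)

# Comparing the distribution of argument clusters across the four conditions
# would show whether content stayed constant while only presentation varied.
```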
The final results contradicted expectations set by decades of academic literature. Overall, participants did moderate their opinions when they encountered opposing arguments: on average, respondents shifted their political attitudes by approximately 2.5 to 4 percentage points in the direction of the opposing argument.
The surprise was that the advanced techniques performed no better than the basic approach. Personalized messages and interactive chats failed to produce larger attitude changes than a single general message. In fact, the motivational interviewing technique was often the least effective method tested.
These numbers suggest that customization and cognitive elaboration may not be the powerful psychological tools campaign strategists assume. If political microtargeting offers any advantage at all, it is very small. A simple, generally persuasive argument appears to be just as effective as a customized digital debate.
The researchers also tracked a secondary outcome called democratic reciprocity. This measure captures whether a person is willing to view political opponents as reasonable, respectable people. For years, academics have debated whether moderating an individual’s opinion on a specific issue automatically reduces their overall hostility toward the opposing group.
This study provided a relatively clear answer to that secondary question. Although many participants moderated their actual policy opinions, the change rarely translated into increased respect for the other side. The ideological gap narrowed, but hostility toward opposing political groups remained essentially unchanged.
The only exception occurred in the interactive chats about the public school curriculum. In that particular setting, participants did show increased democratic reciprocity. The researchers believe this may have happened because the bots explicitly advocated social tolerance as part of their talking points on the curriculum topic.
The researchers note that these findings should not be interpreted as the final word on political communication. The experiments examined only short interactions in an isolated digital environment. Personalization and cognitive elaboration might work far more effectively over a period of months or years.
Additionally, persuasion between real humans may rely on social pressures that artificial intelligence cannot easily imitate. An argument from a close friend can evoke a different psychological response than the same argument presented by an anonymous chatbot in a survey. The researchers hope to explore these boundaries in future studies.
Ultimately, the project demonstrated that generative artificial intelligence can be a highly effective tool for social science research. Creating customized arguments for thousands of subjects entirely by hand would require enormous staffing and financial resources. The software allowed the academic team to test influential theories at a scale that was not previously possible.
The study, “Testing theories of political persuasion using AI,” was authored by Lisa P. Argyle, Ethan C. Busby, Joshua R. Gubler, Alex Lyman, Justin Olcott, Jackson Pond, and David Wingate.

