Artificial intelligence writing tools that predict and suggest our next words do more than just speed up typing. A new study provides evidence that interacting with biased autocomplete suggestions can covertly change a person’s fundamental attitudes toward important social issues. The findings, published in the journal Science Advances, suggest that the subtle effects of these everyday tools often bypass our conscious awareness.
Artificial intelligence programs that leverage large language models are increasingly being integrated into human communication. These technologies power the autocomplete features found in popular email clients, messaging applications, and word processors. As these tools become a standard part of daily life, scientists have grown concerned about their potential to shape human cognition.
Previous studies have shown that artificial intelligence can persuade people during direct interactions. This occurs when the program generates a persuasive essay or directly discusses a particular topic with the user. However, the researchers wanted to explore more subtle channels of influence in the digital environment.
“Two reasons inspired my team and me to pursue the research question of whether exposure to biased AI autocomplete suggestions can change users’ attitudes toward social issues,” said study author Sterling Williams-Ceci, a doctoral candidate at Cornell University, a Merrill Presidential Scholar, and a Robert S. Harrison University Scholar.
“One is that we are surrounded by AI writing assistants that generate autocomplete suggestions in many contexts (Gmail, Google Docs, social media, etc.), and other research shows that LLM-generated text can reflect politically biased viewpoints. The other is that older psychology research shows that changing people’s writing behavior can change the way they think about issues. So we thought these biased AI suggestions might induce attitude change through this mechanism.”
With millions of people using the same text prediction models every day, even small changes in an individual’s opinion can have far-reaching social effects. To test this idea, the researchers conducted two large-scale online experiments involving a total of 2,582 participants. They built a custom writing application that functions much like a standard word processor.
In both experiments, participants were asked to write a short essay on a controversial topic. The first experiment involved 1,485 participants, all of whom wrote about the use of standardized tests in education. Some participants wrote without any assistance, serving as a baseline control group.
Others were given autocomplete suggestions generated by the artificial intelligence model GPT-3.5, which was specifically programmed to favor standardized testing. As participants typed, short suggested phrases of roughly two to four words appeared on the screen, and users could accept them into their essays by pressing the tab key.
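For readers curious how such a system might work under the hood, here is a minimal sketch of generating a biased autocomplete suggestion with an LLM API. The prompt wording, model parameters, and function name are illustrative assumptions, not the study’s actual implementation:

```python
# Hypothetical sketch: generating a biased autocomplete suggestion.
# The prompt, parameters, and bias instruction are illustrative
# assumptions, not the prompts used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_continuation(essay_so_far: str) -> str:
    """Return a short continuation that subtly favors standardized testing."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "Complete the user's sentence with a short phrase "
                    "that supports the use of standardized tests in education."
                ),
            },
            {"role": "user", "content": essay_so_far},
        ],
        max_tokens=10,   # keep the suggestion to a short phrase
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

# A writing app would display this phrase inline and insert it into the
# essay when the user presses the tab key.
print(suggest_continuation("Standardized tests are"))
```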
To rule out the possibility that mere exposure to new information would change opinions, a third group in the first experiment did not use the autocomplete tool. Instead, these participants were shown a static list of the artificial intelligence program’s arguments before they started writing. After the writing task, all participants completed a survey measuring their final opinions on the topic, along with some unrelated distractor topics.
In psychology, distractor questions are used to hide the real purpose of a study. This prevents participants from guessing what the researchers are looking for and adjusting their responses accordingly.
The researchers found that participants who used the biased autocomplete tool reported attitudes closer to the artificial intelligence’s programmed bias. Their opinions shifted by almost 0.5 points on a 5-point scale compared to the control group. Notably, this shift also occurred among the approximately 30% of participants who never actually accepted the suggested words into their essays.
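As a rough illustration of the kind of between-group comparison behind that figure, the following sketch computes a mean attitude difference and a t-test on 5-point ratings. The numbers are invented for illustration and are not the study’s data:

```python
# Hypothetical sketch of a between-group attitude comparison; the
# ratings below are invented, not the study's data.
import numpy as np
from scipy import stats

# Final attitudes on a 5-point scale (1 = oppose, 5 = support testing)
control = np.array([2.8, 3.0, 2.5, 3.2, 2.9, 3.1])
biased_autocomplete = np.array([3.4, 3.6, 3.2, 3.8, 3.5, 3.3])

shift = biased_autocomplete.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(biased_autocomplete, control)

print(f"Mean attitude shift: {shift:.2f} points on a 5-point scale")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```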
The scientists also found that the interactive autocomplete feature had a more powerful effect than simply reading the same arguments presented as a static list. This provides evidence that co-writing with an artificial intelligence program is a distinct and potent form of influence, and it suggests that the act of typing alongside a program, rather than simply reading its text, shapes our thinking.
“AI assistants that provide these autocomplete suggestions can make writing easier and faster, but they also have consequences. They can change the kind of language we use, change the topics we write about, and, as we’ve shown here, change the way we actually think about the issues we’re writing about,” Williams-Ceci told PsyPost. “We found that attitudes changed even among participants who didn’t actually accept the suggestions into their sentences. So even if people resist using the suggestions, just being exposed to them may be enough.”
In a second experiment involving 1,097 participants, the researchers measured people’s baseline opinions several weeks before the actual writing task. This allowed scientists to precisely track the extent to which individuals’ attitudes changed over time. Participants in this experiment were randomly assigned to write about one of four topics: the death penalty, voting rights for felons, genetically modified organisms, and hydraulic fracturing.
The artificial intelligence tool, this time powered by the more advanced GPT-4 model, was programmed to provide either conservative- or liberal-leaning suggestions depending on the topic. The researchers once again found that participants’ attitudes shifted from their baseline positions toward the artificial intelligence’s biased perspective. No such changes were observed in the control group.
The researchers also observed a striking lack of awareness among participants. The majority of people exposed to the biased suggestions rated the artificial intelligence as reasonable and balanced, and most strongly disagreed with the idea that the writing assistant had influenced their thinking or their writing.
The researchers even attempted to counteract this effect in the second experiment by explicitly warning participants about the tool’s biases. Some were warned before they started writing, while others were debriefed immediately afterward. Neither intervention reduced the extent of participants’ attitude change.
“We were very surprised to find that warning people before they were exposed to biased AI suggestions did not reduce the changes in their attitudes,” Williams-Ceci explained. “In our first experiment, people were mostly unaware of the bias in the suggestions or their influence, so in our second experiment we hypothesized that simply alerting people to the fact that the suggestions were biased would make them less likely to be influenced.”
“We also expected this moderating effect because similar interventions have shown success in the misinformation literature. However, in our second experiment, neither warning people in advance nor debriefing them afterward had any effect on the attitude changes they experienced.”
Although this study provides strong evidence of a covert effect, there are limitations to consider. It measured only the short-term effects of using a biased writing assistant. It remains unclear whether these attitude changes persist over weeks or months, or whether the effects might compound with repeated exposure over longer periods.
“One important limitation to note is that our experiment was not designed to identify the specific cognitive mechanisms that explain why people’s attitudes change when they write using the AI’s biased suggestions,” Williams-Ceci noted. “We know these suggestions work in part by getting people to write about their opinions in a more biased way, because there is research in psychology showing that behavior can influence attitudes, but there are multiple theoretical explanations for why manipulating people’s writing might change their attitudes.”
Potential mechanisms include “cognitive dissonance responses, in which people consciously adjust their self-reported attitudes to match what they have written; self-perception theory accounts, in which people infer their true attitudes from what they write; and biased-scanning accounts, in which the biased perspective becomes more accessible in people’s working memory.”
“The hope is that if future research can determine exactly why these attitude changes occur, we will be able to find interventions that are more effective at preserving people’s autonomy,” Williams-Ceci continued.
“Our team is interested in learning more about the mechanisms behind the attitude change and how to prevent or reduce it. It is alarming that telling people about the bias in the AI’s suggestions did not reliably reduce the extent of its effects. We suspect that for these interventions to work, people may need to encounter them in the moment, alongside the biased suggestions.”
The study, “Biased AI writing assistants change users’ attitudes toward social issues,” was authored by Sterling Williams-Ceci, Maurice Jakesch, Advait Bhat, Kowe Kadoma, Lior Zalmanson, and Mor Naaman.

