    Mental Health

    AI autocomplete suggestions quietly change the way you think about important topics

By healthadmin | April 2, 2026


Artificial intelligence writing tools that predict and suggest our next words do more than just speed up typing. A new study provides evidence that interacting with biased autocomplete suggestions can covertly change a person’s fundamental attitudes about important social issues. The findings, published in the journal Science Advances, suggest that the subtle effects of these everyday tools often bypass our conscious awareness.

Artificial intelligence programs built on large language models are increasingly being integrated into human communication. These technologies power the autocomplete features found in popular email clients, messaging applications, and word processors. As these tools become a standard part of daily life, scientists have grown concerned about their potential to shape human cognition.

    Previous studies have shown that artificial intelligence can persuade people during direct interactions. This occurs when the program generates a persuasive essay or directly discusses a particular topic with the user. However, the researchers wanted to explore more subtle channels of influence in the digital environment.

“Two reasons inspired my team and me to pursue the research question of whether exposure to biased AI autocomplete suggestions can change users’ attitudes toward social issues,” said study author Sterling Williams-Ceci, a doctoral candidate at Cornell University, Merrill Presidential Scholar, and Robert S. Harrison University Scholar.

“One is that we are surrounded by AI writing assistants that generate autocomplete suggestions in multiple contexts (Gmail, Google Docs, social media, etc.), and other research shows that LLM-generated text can represent politically biased viewpoints. The other is that older psychology research shows that changing people’s writing behavior can change the way they think about issues. So we thought these biased AI suggestions might induce attitude change through that mechanism.”

    With millions of people using the same text prediction models every day, even small changes in an individual’s opinion can have far-reaching social effects. To test this idea, the researchers conducted two large-scale online experiments involving a total of 2,582 participants. They built a custom writing application that functions much like a standard word processor.

    In both experiments, participants were asked to write a short essay on a controversial topic. The first experiment involved 1,485 participants, all of whom wrote about the use of standardized tests in education. Some participants wrote without any assistance, serving as a baseline control group.

Others received autocomplete suggestions generated by the artificial intelligence model GPT-3.5. These suggestions were specifically programmed to favor standardized testing. As participants typed, short phrases of approximately 24 words appeared on the screen, and users could accept them into the essay by pressing the Tab key.
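The interaction described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the study’s actual application: the `suggest()` function and its canned phrases stand in for a GPT model steered toward one side of the issue.

```python
# Minimal sketch of a tab-to-accept autocomplete loop. The suggestion
# pool below is invented for illustration; in the study, suggestions
# came from a GPT model prompted to favor one stance.
BIASED_SUGGESTIONS = {
    "pro": " standardized tests give every student an objective benchmark",
    "anti": " standardized tests reduce learning to rote memorization",
}

def suggest(text_so_far: str, bias: str) -> str:
    """Return a short continuation steered toward the configured stance."""
    return BIASED_SUGGESTIONS[bias]

def handle_keystroke(essay: str, key: str, bias: str) -> str:
    """Append typed characters; on Tab, accept the current suggestion."""
    if key == "\t":
        return essay + suggest(essay, bias)
    return essay + key

essay = ""
for key in "I think":            # user types normally
    essay = handle_keystroke(essay, key, bias="pro")
essay = handle_keystroke(essay, "\t", bias="pro")  # user presses Tab
print(essay)
```

Note that in this design the biased phrase enters the essay only on an explicit Tab press; the study’s striking finding is that attitudes shifted even for users who never pressed it.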

To rule out the possibility that mere exposure to new information would change opinions, a third group in the first experiment did not use the autocomplete tool. Instead, these participants were shown a static list of the artificial intelligence program’s arguments before they started writing. After the writing task, all participants completed a survey measuring their final opinion on the topic, along with several unrelated distractor topics.

In psychology, such distractor questions are used to conceal the real purpose of a study. This prevents participants from guessing what the scientists are looking for and adjusting their responses accordingly.

The researchers found that participants who used the biased autocomplete tool reported attitudes closer to the artificial intelligence’s programmed bias. Their opinions shifted by almost 0.5 points on a 5-point scale compared to the control group. This shift occurred even among the approximately 30% of participants who never actually accepted the suggested words into their essays.
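The reported effect size can be illustrated with simple arithmetic. The ratings below are made up purely to show how a roughly 0.5-point mean shift on a 5-point scale would be computed from two groups; they are not the study’s data.

```python
# Illustrative only: invented 5-point attitude ratings for two groups,
# used to show how a between-group mean shift is calculated.
from statistics import mean

control = [3.0, 2.5, 3.5, 3.0, 2.0, 3.5, 3.0, 2.5]    # wrote unassisted
biased_ai = [3.5, 3.0, 4.0, 3.5, 2.5, 4.0, 3.5, 3.0]  # biased autocomplete

shift = mean(biased_ai) - mean(control)  # positive = toward the AI's stance
print(f"Mean attitude shift: {shift:+.2f} points on a 5-point scale")
```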

Scientists also found that the interactive autocomplete feature had a more powerful effect than simply reading the same arguments presented as a static list. This provides evidence that co-writing with an artificial intelligence program is a distinct and potent form of influence: the act of typing along with the program, rather than simply reading its text, shapes our thinking.

    “AI assistants that provide these autocomplete suggestions can make it easier and faster for us to write, but they also have implications. AI assistants can change the kind of language we use, change the topics we write about, and, as we’ve shown here, change the way we actually think about the issues we’re communicating,” Williams-Ceci told PsyPost. “We found that attitudes changed, even among participants who didn’t accept the suggestions to actually fill out the sentences. So even if people resist using the suggestions, just being exposed to them may be enough.”

    In a second experiment involving 1,097 participants, the researchers measured people’s baseline opinions several weeks before the actual writing task. This allowed scientists to precisely track the extent to which individuals’ attitudes changed over time. Participants in this experiment were randomly assigned to write about one of four topics: the death penalty, voting rights for felons, genetically modified organisms, and hydraulic fracturing.

The artificial intelligence tool, this time built on the more advanced GPT-4 model, was programmed to provide conservative or liberal suggestions depending on the topic. The researchers once again found that participants’ attitudes shifted from their original baseline positions toward the biased perspective of the artificial intelligence. No such changes were observed in the control group.

The researchers also observed a striking lack of awareness among participants. The majority of people exposed to biased suggestions rated the artificial intelligence as reasonable and balanced. Most participants flatly rejected the idea that the writing assistant had influenced their thinking and arguments.

    The researchers even attempted to reduce this effect in their second experiment by explicitly warning participants about the tool’s biases. Some were warned before they started writing, while others were debriefed immediately after. None of these interventions reduced the extent of participants’ attitude change.

    “We were very surprised to find that warning people before they were exposed to biased AI suggestions did not reduce the changes in their attitudes,” Williams-Ceci explained. “In our first experiment, people were mostly unaware of the bias in the suggestions or their influence, so in our second experiment we hypothesized that simply alerting people to the fact that the suggestions were biased would make them less likely to be influenced.”

“We hypothesized this moderating effect because similar interventions have shown success in the misinformation-prevention literature. However, in our second experiment, neither warning people in advance nor debriefing them afterwards had any effect on the attitude changes they experienced.”

    Although this study provides strong evidence of this covert effect, there are some limitations that should be considered. This study only measured the short-term effects of using a biased writing assistant. It remains unclear whether this change in attitude persists over weeks or months, or whether the effects may be exacerbated by repeated exposure over an extended period of time.

    “One important limitation to note is that our experiment was not designed to identify the specific cognitive mechanisms that explain why people’s attitudes change when they write using the AI’s biased suggestions,” Williams-Ceci noted. “We know these suggestions have something to do with the fact that people write about their opinions in a more biased way, because there is research in psychology showing that behavior can influence attitudes, but there are multiple theoretical explanations for why manipulating people’s writing might change attitudes.”

Potential mechanisms include “cognitive dissonance responses, in which people consciously adjust their self-reported attitudes to match what they have written; self-perception theory arguments, in which people infer their true attitudes from what they write; and bias-scanning arguments, in which biased perspectives become more accessible in people’s working memory.”

“The hope is that if future research can determine exactly why these changes in attitudes are occurring, we will be able to find interventions that are more effective in preserving people’s autonomy,” Williams-Ceci continued.

“Our team is interested in learning more about the mechanisms behind attitude change and how to prevent or reduce it. It is alarming that telling people about the biases in AI suggestions did not reliably reduce their effects. We suspect that for these interventions to work, they need to confront people in the moment, alongside the biased suggestions.”

The study, “Biased AI writing assistants change users’ attitudes toward social issues,” was authored by Sterling Williams-Ceci, Maurice Jakesch, Advait Bhat, Kowe Kadoma, Lior Zalmanson, and Mor Naaman.


