    Health Magazine
    Mental Health

    New study finds people are still ‘blessedly ignorant’ about the use of AI in everyday messages

By healthadmin | April 20, 2026 | 8 Mins Read


Recent research published in Computers in Human Behavior found that people judge others more harshly when they know a message was written using artificial intelligence. However, people rarely suspect that artificial intelligence has been used in everyday situations. When recipients are given no information about how a message was created, they assume it was written by a human and form a positive impression of the sender.

Generative artificial intelligence refers to computer programs that can generate realistic, human-like text based on simple user instructions. More and more people are using these tools (such as Claude, ChatGPT, and Gemini) to draft emails, social media posts, and text messages. Researchers Jiaqi Zhu and András Molnar wanted to investigate how reliance on these programs affects the way we see each other in daily life.

    Writing thoughtful messages usually takes time and mental energy. These efforts demonstrate the sender’s sincerity and investment in the relationship. Using text generators eliminates this hassle, so the researchers wanted to know whether using these tools made people more distrustful of the messages they received.

    Previous research has shown that people judge communicators more negatively when they learn that a message was generated by artificial intelligence. But in the real world, few people would admit to using a computer program to write an email. Zhu and Molnar conducted a study to examine how people form impressions in realistic situations where the use of artificial intelligence remains secret or uncertain.

“Since the release of ChatGPT in late 2022, discussions about generative AI have become unavoidable in academic settings. For most instructors, detecting and regulating the use of AI is now part of the job, and in some cases caution has escalated into outright paranoia. Some instructors may even be overly eager to flag fully human writing as AI-generated, as evidenced by the growing number of high-profile lawsuits against universities by students who were failed or expelled based on suspicion of using AI,” said study author András Molnar, assistant professor of psychology at the University of Michigan.

    “However, in conversations with people outside academia, we realized that we may be living in a bubble. What is felt on a daily basis in academia may not reflect how people think elsewhere. That was the motivation for our research. We wanted to understand whether people are suspicious of the use of AI in everyday situations such as emails, text messages, and social media profiles.”

To investigate these questions, Zhu and Molnar conducted two online experiments. In the first experiment, the researchers recruited 647 U.S. adults and asked them to read a fictitious email. Participants were randomly assigned to read one of four types of messages: a thank-you email from a friend, a job application from a nanny, a cover letter from a data analyst, or project feedback from a colleague.

The scientists also divided participants into four groups and gave each group different information about how the email had been written. One group was told that the sender had written the message entirely themselves. Another group was told that the sender had used an artificial intelligence chatbot to generate the text.

A third group was told that it was unknown whether the message had been written by a human or generated by artificial intelligence. The last group received no information at all about the source of the message, mimicking how we typically receive emails in real life.

After reading the email, participants rated their social impression of the sender on 10 personal characteristics, including friendliness, honesty, and trustworthiness. The researchers found that participants who knew artificial intelligence was used to create the message rated the sender more negatively.
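The between-group comparison described above can be sketched as a small simulation. Everything here is invented for illustration: the condition means, the sample size, the noise level, and the 1–7 rating scale are assumptions, not the paper's actual numbers — only the qualitative pattern (AI-disclosed rated lowest, no-information near human-disclosed) follows the reported findings.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical condition means loosely mirroring the reported pattern;
# the values themselves are invented for this sketch.
CONDITION_MEANS = {"human": 5.5, "ai": 4.2, "uncertain": 5.2, "no_info": 5.5}

def simulate_ratings(condition, n=160, noise=0.8):
    """Draw n impression ratings (clamped to a 1-7 scale) for one group."""
    mu = CONDITION_MEANS[condition]
    return [min(7.0, max(1.0, random.gauss(mu, noise))) for _ in range(n)]

ratings = {c: simulate_ratings(c) for c in CONDITION_MEANS}
group_means = {c: mean(r) for c, r in ratings.items()}

# Expected pattern: the AI-disclosed group comes out lowest, while the
# no-information group lands close to the human-disclosed group.
print(group_means)
```

In a real analysis the four groups would be compared with an ANOVA or regression rather than raw means, but the sketch shows the design: one between-subjects factor (disclosure condition) and one outcome (the averaged impression rating).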

This finding confirms that explicitly disclosing the use of artificial intelligence damages an individual’s social reputation. The researchers also analyzed the words participants used to describe their first impressions of the sender. When the use of artificial intelligence was disclosed, participants used fewer positive words and more negative words to describe the sender.

However, even when participants received no information about how the message was created, they rated the sender as positively as those who were told a human had written it. The scientists noted that participants in this group showed no natural suspicion. Even in the uncertainty group, where the possibility of AI involvement was made explicit, participants formed impressions much closer to those in the human-written group than to those in the artificial intelligence group.

“In these everyday interactions, people are very averse to receiving AI-generated messages from other people,” Molnar told PsyPost. “For example, AI-generated apologies, no matter how sophisticated, are undesirable because they sound inauthentic and empty. Outsourcing highly personal communications to an AI can feel like a betrayal and even show disrespect.”

    “However, this ‘AI penalty’ appears to apply only when someone knows or strongly suspects that an AI was used to write the message. What our research shows is that in the absence of explicit disclosure (e.g., a label indicating the use of an AI), people typically do not suspect AI in everyday situations and treat these messages as if they were written entirely by humans.”

The researchers conducted a second experiment seven months later to see whether growing public awareness of these text-generating programs had increased natural skepticism. They collected a new sample of 654 adults in the United States. This time, they updated the scenarios to include a more diverse range of communication types, such as social media posts about summer internships, text messages apologizing for a canceled dinner, and detailed online dating profiles.

    In this second experiment, the scientists asked participants to estimate how much time and mental effort the sender spent on the message. The researchers also asked how accurately the texts reflected the sender’s true feelings. Participants who were told that the text was generated by a computer program rated it lower on all three measures.

Participants in the group that received no information about how the message was created assumed the sender had expended as much mental effort as participants who were told a human wrote it. The researchers found that perceived lack of effort, and lower perceived accuracy in reflecting the sender’s true feelings, fully explained why participants penalized artificial intelligence users. The second experiment replicated the results of the first study, showing that people remain blissfully ignorant of the use of artificial intelligence.
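The "fully explained" claim is a mediation-style result: the AI label lowers perceived effort, and perceived effort in turn drives the impression. A minimal sketch of that logic, with all numbers invented for illustration (this is not the paper's model or data), is to compare the raw group difference with the difference that remains after adjusting for perceived effort:

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical full-mediation setup: the "AI-written" label lowers perceived
# effort, and perceived effort (not the label itself) drives the impression.
n = 2000
ai = [i % 2 for i in range(n)]                               # 1 = AI disclosed
effort = [3.0 - 1.5 * a + random.gauss(0, 0.5) for a in ai]  # perceived effort
impression = [2.0 + 1.0 * e + random.gauss(0, 0.5) for e in effort]

def group_diff(y, g):
    """Mean of y in the human-labeled group minus the AI-labeled group."""
    return mean(v for v, a in zip(y, g) if a == 0) - mean(v for v, a in zip(y, g) if a == 1)

# Total effect: the AI label lowers impressions overall.
total = group_diff(impression, ai)

# Residualize impressions on perceived effort (one-predictor OLS slope),
# then check how much the label explains beyond effort.
mx, my = mean(effort), mean(impression)
slope = sum((x - mx) * (y - my) for x, y in zip(effort, impression)) / \
        sum((x - mx) ** 2 for x in effort)
resid = [y - slope * x for x, y in zip(effort, impression)]
direct = group_diff(resid, ai)

print(total, direct)  # under full mediation, direct shrinks toward zero
```

Under full mediation, the direct effect of the label vanishes once perceived effort is accounted for — which is the pattern the study reports for effort and perceived authenticity together.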

“What surprised us most was that people who were heavy users of generative AI themselves (frequently sending AI-generated or AI-edited messages) were less likely to suspect that others were using AI,” Molnar said. “We expected people to become more skeptical as they gained more experience with these tools, but that was not the case. In other words, getting used to AI doesn’t automatically make you more suspicious in your everyday interactions.”

“This finding is important because it suggests that people can outsource their writing to AI with relatively little risk of detection or suspicion. This creates an uneven playing field. Those who are unwilling or unable to use AI are at a disadvantage. Heavy users, on the other hand, can appear clearer, more sophisticated, and more effective without incurring negative perceptions unless they admit they have used AI. And why would they do so?”

    When discussing the findings, the scientists highlighted potential misconceptions about what participants were actually assessing. Molnar explained that the study was designed to measure how people judge the author of a message, rather than how they judge the quality or effectiveness of the message itself. The focus was solely on the social impressions formed about the person on the other side of the screen.

This study also has some limitations that provide avenues for future research. Because the experiment was based on hypothetical scenarios, participants might react differently in real-life situations with real stakes. The researchers also tested only messages written entirely by artificial intelligence, rather than partial use, such as editing a few sentences with a program.

    Because this study focuses on one-way communication, it is unclear how people react during live back-and-forth conversations. Additionally, this study only included participants from the United States. Researchers are particularly interested in exploring what specific situations in everyday life trigger suspicion.

    “Our next step is to understand what triggers wariness and suspicion. What flips the switch between everyday communication and situations like academia, where people are more aware of the potential uses of AI? Our current research already suggests that it’s not just a matter of exposure and familiarity with these tools, as even heavy users of AI are less likely to be suspicious of others,” Molnar said.

“So we’re currently testing alternative explanations, such as whether high-stakes situations (grades, recruitment, evaluations) reliably increase vigilance, or whether people become more skeptical only after a negative personal experience that teaches them to be careful about the use of AI. We also want to collect data in other countries (the current experiment was conducted in the United States) to see whether skepticism and vigilance differ across cultures.”

The study, “Blissful (A)Ignorance: Despite the prevalence of AI in communication, people do not suspect its use in real-world situations,” was authored by Jiaqi Zhu and András Molnar.


