Recent research published in Computers in Human Behavior found that people judge others more harshly when they know a message was written with artificial intelligence. In everyday situations, however, individuals rarely suspect that artificial intelligence was used at all. When recipients are given no information about how a message was created, they tend to assume it was written by a human and form a positive impression of the sender.
Generative artificial intelligence refers to computer programs that can generate realistic, human-like text based on simple user instructions. More and more people are using these tools (such as Claude, ChatGPT, and Gemini) to draft emails, social media posts, and text messages. Scientists Jiaqi Zhu and Andras Molnar wanted to investigate how reliance on these programs affects the way we see each other in daily life.
Writing thoughtful messages usually takes time and mental energy, and that effort demonstrates the sender’s sincerity and investment in the relationship. Text generators eliminate much of this effort, so the researchers wanted to know whether the use of these tools makes people more distrustful of the messages they receive.
Previous research has shown that people judge communicators more negatively when they learn that a message was generated by artificial intelligence. But in the real world, few people would admit to using a computer program to write an email. Zhu and Molnar conducted a study to examine how people form impressions in realistic situations where the use of artificial intelligence remains secret or uncertain.
“Since the release of ChatGPT in late 2022, discussions about generative AI have become inevitable in academic settings. For most instructors, detecting and regulating the use of AI is now part of the job, and in some cases this caution has descended into outright paranoia. Some instructors may even be too eager to find AI in writing that is potentially fully human, as evidenced by the growing number of high-profile lawsuits against universities brought by students who were failed or expelled based on suspicion of using AI,” said study author András Molnar, assistant professor of psychology at the University of Michigan.
“However, in conversations with people outside academia, we realized that we may be living in a bubble. What is felt on a daily basis in academia may not reflect how people think elsewhere. That was the motivation for our research. We wanted to understand whether people are suspicious of the use of AI in everyday situations such as emails, text messages, and social media profiles.”
To investigate these questions, Zhu and Molnar conducted two online experiments. In the first experiment, the researchers recruited 647 U.S. adults and asked them to read a fictitious email. Participants were randomly assigned to read one of four types of messages: a thank-you email from a friend, a job application from a nanny, a cover letter from a data analyst, or project feedback from a colleague.
The scientists also divided participants into four groups, giving each group different information about how the email had been written. One group was told that the sender had written the message entirely themselves. Another group was told that the sender had used an artificial intelligence chatbot to generate the entire text.
A third group was told that it was unknown whether the message had been written by a human or generated by artificial intelligence. The last group received no information at all about the source of the message. This final condition mimics how we typically receive emails in real life.
After reading the email, participants rated their social impression of the sender on 10 personal characteristics, including traits such as friendliness, honesty, and trustworthiness. The researchers found that participants who knew artificial intelligence had been used to create the message rated the sender more negatively.
This finding confirms that explicitly disclosing the use of artificial intelligence damages the sender’s social standing. The researchers also analyzed the words participants used to describe their first impressions of the sender. When the use of artificial intelligence was disclosed, participants used fewer positive words and more negative words to describe the sender.
However, when participants received no information about how the message was created, they rated the sender just as positively as those who were told a human had written it. The scientists noted that participants in this group showed no spontaneous suspicion. Even in the uncertain group, where the possibility of computer assistance was explicitly raised, participants formed impressions much closer to those in the human-written group than to those in the artificial intelligence group.
“In these everyday interactions, people really dislike receiving AI-generated messages from other people,” Molnar told PsyPost. “For example, AI-generated apologies, no matter how sophisticated, are undesirable because they sound inauthentic and empty. Outsourcing highly personal communications to an AI can feel like a betrayal, or even a sign of disrespect.”
“However, this ‘AI penalty’ appears to apply only when someone knows or strongly suspects that an AI was used to write the message. What our research shows is that in the absence of explicit disclosure (e.g., a label indicating the use of an AI), people typically do not suspect AI in everyday situations and treat these messages as if they were written entirely by humans.”
The researchers conducted a second experiment seven months later to see whether growing public awareness of these text-generating programs had increased natural skepticism. They recruited a new sample of 654 U.S. adults and updated the scenarios to cover a more diverse range of communication formats. The new scenarios included a social media post about a summer internship, a text message apologizing for a canceled dinner, and a detailed online dating profile.
In this second experiment, the scientists asked participants to estimate how much time and mental effort the sender spent on the message. The researchers also asked how accurately the texts reflected the sender’s true feelings. Participants who were told that the text was generated by a computer program rated it lower on all three measures.
In the group that received no information about how the message was created, participants assumed the sender had invested just as much time and mental effort as when the sender was explicitly identified as a human writer. The researchers found that perceptions of reduced effort and of how accurately the message reflected the sender’s true feelings fully explained why participants penalized artificial intelligence users. The second experiment also replicated the results of the first study, showing that people remain blissfully unaware of the use of artificial intelligence.
“What surprised us most was that people who were heavy users of generative AI themselves (those who frequently sent AI-generated or AI-edited messages) were less likely to suspect that others were using AI,” Molnar said. “We expected people to become more skeptical as they gained more experience with these tools, but that was not the case. In other words, getting used to AI does not automatically make you more suspicious in your everyday interactions.”
“This finding is important because it suggests that people can outsource their writing to AI with relatively little risk of detection or suspicion. This creates an uneven playing field: those who are unwilling or unable to use AI are at a disadvantage, while heavy users can appear clearer, more sophisticated, and more effective without incurring negative perceptions, unless they admit they have used AI. And why would they?”
When discussing the findings, the scientists highlighted potential misconceptions about what participants were actually assessing. Molnar explained that the study was designed to measure how people judge the author of a message, rather than how they judge the quality or effectiveness of the message itself. The focus was solely on the social impressions formed about the person on the other side of the screen.
This study also has some limitations that suggest avenues for future research. Because the experiments were based on hypothetical scenarios, participants might react differently in real-life situations with real stakes. The researchers also tested only messages generated entirely by artificial intelligence, rather than partial use, such as having a program edit a few sentences.
Because the study focused on one-way communication, it is also unclear how people would react during live, back-and-forth conversations. Additionally, the study included only participants from the United States. The researchers are particularly interested in exploring which specific everyday situations trigger suspicion.
“Our next step is to understand what triggers wariness and suspicion. What flips the switch between everyday communication and situations like academia, where people are more aware of the potential uses of AI? Our current research already suggests that it’s not just a matter of exposure and familiarity with these tools, as even heavy users of AI are less likely to be suspicious of others,” Molnar said.
“So we’re currently testing alternative explanations, such as whether high-stakes situations (grades, hiring, evaluations) reliably increase vigilance, or whether people become more skeptical only after a negative personal experience teaches them to be careful about the use of AI. We also want to collect data in other countries (the current experiments were conducted in the United States) to see whether skepticism and vigilance differ across cultures.”
The study, “Blissful (A)Ignorance: Despite the prevalence of AI in communication, people do not suspect its use in real-world situations,” was authored by Jiaqi Zhu and Andras Molnar.

