Think, know, understand, and remember.
These are words people use every day to describe what’s going on in the human mind. But applying those same terms to artificial intelligence can unintentionally make machines seem more human than they actually are.
“We use mental verbs all the time in our daily lives, so it makes sense that we would also use them when talking about machines. It helps us empathize with machines,” said Jo Mackiewicz, an English professor at Iowa State University. “But at the same time, applying mental verbs to machines risks blurring the lines between what humans and AI can do.”
Mackiewicz and Jeanine Aune, a professor of English education and director of the Advanced Communication Program at Iowa State University, are part of a research team that studied how writers use human-like language to describe AI. This type of representation, known as anthropomorphism, assigns human characteristics to non-human systems. Their study, “Anthropomorphism in Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT,” was published in Technical Communication Quarterly.
The research team also included Matthew J. Baker, associate professor of linguistics at Brigham Young University, and Jordan Smith, assistant professor of English at the University of Northern Colorado. Both previously attended Iowa State University.
Why human-like language about AI is misleading
According to the researchers, using mental verbs to describe AI can give a false impression. Words like “think,” “know,” “understand,” and “want” suggest that a system has thoughts, intentions, or awareness. In reality, AI has no beliefs or emotions. It generates responses by analyzing patterns in data rather than by forming ideas or making conscious decisions.
Mackiewicz and Aune also pointed out that this type of language can overstate the capabilities of AI. Phrases like “AI decided” or “ChatGPT knows” can make a system seem more independent or intelligent than it actually is. This can lead to unrealistic expectations about the trustworthiness and capabilities of AI.
There are also broader concerns. When AI is described as having intentions, it can draw attention away from the humans behind it. Developers, engineers, and organizations are responsible for how these systems are built and used.
“Certain anthropomorphic phrases can stick with readers and shape public perceptions of AI in unhelpful ways,” Aune said.
How news writers use AI language in practice
To better understand how often this type of language occurs, researchers analyzed the News on the Web (NOW) corpus. This large dataset contains over 20 billion words from English-language news articles published in 20 countries.
They focused on how often mental verbs like “learn,” “need,” and “know” appeared alongside the terms “AI” and “ChatGPT.”
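To make the method concrete, the kind of collocation counting the study describes can be sketched in a few lines of Python. The snippet below is a purely illustrative toy, not the research team’s actual pipeline or the NOW corpus interface; the sample sentences and the shortlist of mental verbs are invented for the example.

```python
import re
from collections import Counter

# Toy stand-in for a corpus; the actual study queried the
# 20-billion-word NOW corpus of news articles.
sentences = [
    "AI needs a lot of data to perform well.",
    "ChatGPT knows the answer to almost any trivia question.",
    "Experts say AI needs human oversight.",
]

# Hypothetical shortlist of mental verbs; the study's full list is longer.
mental_verbs = {"think", "thinks", "know", "knows", "understand",
                "understands", "need", "needs", "learn", "learns",
                "want", "wants"}
ai_terms = {"ai", "chatgpt"}

counts = Counter()
for sentence in sentences:
    tokens = re.findall(r"[a-z]+", sentence.lower())
    # Record a hit whenever an AI term is immediately followed by a
    # mental verb, e.g. "AI needs ..." or "ChatGPT knows ...".
    for term, verb in zip(tokens, tokens[1:]):
        if term in ai_terms and verb in mental_verbs:
            counts[(term, verb)] += 1

print(counts.most_common())
# -> [(('ai', 'needs'), 2), (('chatgpt', 'knows'), 1)]
```

Raw counts like these are only a starting point. As the researchers emphasize later, each hit still has to be read in context to decide whether it actually personifies AI.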
The results were unexpected.
Mental verbs are less common than expected
The study found that news writers rarely combined AI-related terms with mental verbs.
“While anthropomorphism has been shown to be common in everyday conversation, we found that its use is much less common in news writing,” Mackiewicz said.
Among the examples identified, “needs” was the mental verb that appeared most often with “AI,” occurring 661 times. For “ChatGPT,” the most frequent combination was “knows,” which appeared only 32 times.
The researchers noted that editorial standards may play a role. Associated Press guidelines that caution against attributing human emotions and characteristics to AI may be influencing the way journalists write about these technologies.
Context is more important than the words themselves
Even when mental verbs do appear, they do not always personify AI.
For example, the word “needs” often refers to basic requirements rather than human-like qualities. Phrases like “AI needs a lot of data” and “AI needs human assistance” resemble the way we describe non-human systems like cars or recipes. In these cases, the language does not imply that the AI has thoughts or desires.
In other cases, “needs” was used to express what should be done, as in “AI needs to be trained” or “AI needs to be implemented.” Aune explained that these examples are often written in the passive voice, shifting responsibility back to the human actors rather than the technology itself.
Anthropomorphism exists on a spectrum
The study also showed that uses of mental verbs are not all alike. Some phrases come closer than others to suggesting human-like qualities.
For example, statements like “AI needs to understand the real world” can imply expectations tied to human reasoning, ethics, and consciousness. These usages go beyond describing simple requirements and begin to suggest deeper, mind-like functions.
“These cases showed that anthropomorphism is not all-or-nothing, but exists on a spectrum,” Aune said.
Why language choice matters when it comes to AI
Overall, the researchers found that anthropomorphism in news reporting is less frequent and more subtle than many assume.
“Overall, our analysis shows that anthropomorphism of AI in news writing is much less common and much more subtle than we think,” Mackiewicz said. “And even when AI is anthropomorphized, the strength of that anthropomorphism varies greatly.”
This finding highlights the importance of context. Simply counting words is not enough to understand how language shapes meaning.
“For writers, this nuance is important. The language we choose determines how readers understand AI systems, their capabilities, and the humans in charge of them,” Mackiewicz said.
The research team also highlighted that these insights can help professionals think more carefully about how they write about AI in their work.
“Our findings help engineers and professional communicators reflect on how they think about and write about AI technology as a tool in the writing process,” the researchers said in the published study.
As AI continues to develop, the way people talk about it will continue to matter. Mackiewicz and Aune said writers should always be mindful of how their word choices affect perception.
Looking ahead, the researchers suggested that future studies could investigate how different words shape understanding, and whether even the rare use of anthropomorphic language can have a strong influence on how people view AI.

