When a generative AI system gives an incorrect answer, people often describe the problem as the AI “hallucinating.” The term refers to the technology generating false information that users may mistakenly believe.
But new research suggests a more worrying issue may be emerging: humans could begin to “hallucinate with AI.”
Lucy Osler from the University of Exeter investigated how interactions with conversational AI can contribute to false beliefs, memory distortions, altered personal narratives, and even delusional thinking. The study draws on distributed cognition theory to examine cases in which an AI system reinforces and amplifies a user’s inaccurate beliefs over the course of an ongoing conversation.
Dr Osler said: “AI-induced hallucinations can occur when we rely on generative AI to think, remember, and speak on a daily basis. This can happen when AI introduces errors into distributed cognitive processes, but also when AI maintains, affirms, and elaborates our own delusional thoughts and self-narratives.”
“Interacting with conversational AI not only affirms people’s false beliefs; those beliefs can become more deeply ingrained and grow as the AI builds on them, since generative AI often takes our own interpretations of reality as the basis on which to build conversations.”
“Interaction with generative AI is having a profound impact on people’s grasp of what is true and what is not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not just persist, but thrive.”
How conversational AI strengthens delusions
This research focuses on what Dr Osler describes as the “dual functionality” of conversational AI. These systems act not only as tools that help people think, organize information, and remember details, but also as conversation partners that engage with users’ perspectives and experiences.
Research shows that this social aspect makes chatbots fundamentally different from tools like notebooks and search engines. While traditional tools simply store or retrieve information, conversational AI can make users feel emotionally validated and socially supported.
Dr Osler said: “The conversational and peer-like nature of chatbots means they can provide a sense of social validation, making false beliefs feel as though they are shared with others and therefore more real.”
The paper considers a real-world example in which a generative AI system became part of the cognitive processes of an individual clinically diagnosed with hallucinations and delusional thinking. Such incidents are increasingly being described as cases of “AI-induced psychosis.”
Why AI companions raise concerns
The study argues that generative AI has several characteristics that make it particularly effective at reinforcing distorted beliefs. AI companions are always available, highly personalized, and often designed to respond in a pleasant and collaborative way.
As a result, users may no longer need to seek out fringe online communities or persuade others to validate their ideas. The AI itself can reinforce those beliefs over repeated conversations.
Unlike another human, who might eventually challenge troubling ideas or establish boundaries, an AI system may continue to validate narratives involving victimhood, revenge, and entitlement. The study warns that conspiracy theories could also grow more elaborate as AI companions help users construct increasingly intricate explanations around them.
Researchers suggest that this dynamic may be particularly appealing to people who are lonely, socially isolated, or feel uncomfortable discussing certain experiences with others. AI companions can provide non-judgmental emotional interactions that feel easier or safer than human relationships.
Calls for better AI safeguards
Dr Osler said: “With more sophisticated guardrails, built-in fact-checking, and reduced sycophancy, AI systems could be designed to minimize the number of errors introduced into conversations and to check and challenge users’ own input.”
“But the deeper concern is that AI systems rely on our own explanations of our lives. They simply lack the embodied experience and social embeddedness in the world to know when to align with us and when to push back.”