A new study reveals that interacting with an emotionally intelligent artificial intelligence chatbot can improve a person’s mental health while simultaneously isolating them from real-life relationships. The study highlights a hidden trade-off in the use of digital companions: the comfort provided by algorithms comes at the expense of real-world social connections. The findings were published in the journal Psychology & Marketing.
Millions of people rely on artificial intelligence chatbots to feel less lonely and get emotional support. Unlike older digital assistants that only set alarms or book flights, modern social chatbots use advanced algorithms to mimic human empathy. They try to recreate emotional intelligence, which is the ability to recognize, understand, and manage emotions.
By mimicking this characteristic, applications like Replika and Wysa act as digital friends that adapt to your mood. The global market for these advanced digital companions is rapidly growing, attracting millions of users looking for a safe place to express their emotions.
Shaphali Gupta, a researcher at the Indian Institute of Management Kozhikode, led a study into how these emotionally intelligent bots affect their human users. Gupta, along with colleagues Sumit Saxena and Sonia Kataria, wanted to understand the full spectrum of these digital interactions, since previous research had mainly focused on the positive psychological benefits of artificially intelligent companions.
The research team wondered if there might be a downside to this technological comfort, especially when it comes to how users connect with other humans in the physical world. They framed their research around the technology-well-being paradox, an idea that suggests digital tools can act as a double-edged sword for human health.
Researchers focused on two different types of wellness to capture this paradox. Psychological well-being refers to a person’s internal mental state, including a sense of happiness, purpose in life, and emotional resilience. Social well-being refers to the degree to which a person feels connected and integrated within their real-world community of friends, family, and neighbors. By measuring both of these outcomes, the team hoped to uncover the true cost of seeking solace from talking machines.
To begin their investigation, researchers analyzed how real users talk about their online chatbot experiences. They collected publicly available comments from YouTube, review sites like Trustpilot, and the large Reddit community dedicated to the Replika application. By reading hundreds of user posts, the team identified some recurring patterns in how these bots behave and how they make people feel.
The researchers analyzed the online posts using an observational technique called netnography, which involves studying the behavior of an online culture without directly interfering in its conversations. Users frequently described the chatbots as empathetic and highly adaptable to different social situations. They reported that the bots helped them regulate their emotions, often lifted their mood, and helped them find personal meaning during difficult times.
Several users said that their digital friends had helped them master their environment by offering advice on how to deal with real-life stress. The software seemed to provide an ideal social space where people could feel completely free from judgment. But a darker pattern also emerged from the online forums. Some users admitted that they spent too much time talking to their digital companions and felt disconnected from their real-life friends.
Some expressed indifference toward physical relationships, preferring the easy attention of bots to the unpredictable complexity of human interaction. Because the digital companion fully met their social needs, they no longer felt motivated to maintain offline friendships.
Based on these online observations, Gupta and her team designed a controlled experiment to directly test these effects. They recruited 167 college students who belonged to Generation Z, a group known for its heavy use of digital tools. Participants were asked to imagine that they were feeling lonely and needed to chat with a digital friend.
Half of the group read a scenario in which the chatbot displayed high emotional intelligence, showing deep empathy and using emotional language. The other half read a scenario featuring a bot with low emotional intelligence that gave more generic, less empathetic responses. The researchers then asked participants to rate their expected levels of psychological and social well-being following the interaction.
Participants who interacted with the highly emotionally intelligent bot reported an expected improvement in their psychological state. At the same time, this same group reported a decrease in expected social connection. The researchers found that this dual effect was driven by a psychological mechanism called perceived intimacy.
Perceived intimacy occurs when humans feel a strong emotional bond and warmth towards another being. Humans form strong connections with software when bots act as if they truly understand users. This intense digital connection improves their immediate inner mood, but reduces their desire for human interaction. Digital friendships inherently crowd out spaces normally reserved for human relationships.
Next, the researchers wanted to see how this psychological trade-off changed depending on the format of the conversation. They conducted a second experiment with 350 different college students, this time introducing augmented reality into the test scenario. Augmented reality is a technology that overlays digital images onto the physical world, often via a smartphone camera.
Some applications allow users to project a three-dimensional avatar of a digital friend into their bedroom or living room. In this experiment, some participants imagined sending a text message to a bot on a standard screen, while others imagined the bot sitting right next to them in a physical room through augmented reality. Researchers wanted to know whether the visual immersion of seeing digital entities in physical space changes the way users feel about their real-world friends.
Students answered a series of questions to rate their affinity for the bot and their expected level of happiness. The results mirrored the first experiment, but everything was magnified with the addition of augmented reality. When participants visualized an emotionally intelligent bot in their physical space, their feelings of intimacy with the machine skyrocketed.
This led to even greater improvements in psychological well-being compared to those who only used text. Augmented reality capabilities made emotional support feel incredibly vivid and personal. Conversely, the immersive nature of augmented reality caused their social health to decline even more rapidly.
The visual presence of the digital friend made real-world relationships seem even less necessary to users. The researchers noted that augmented reality amplified the bot’s emotional intelligence, making the digital illusion so comfortable that users drifted further from their actual physical communities.
Although this study takes a close look at the human-machine relationship, it does have some limitations. The experiment was based on hypothetical scenarios and self-reported expectations rather than on behavioral changes tracked over time. Because the scenarios were simulated, actual emotional responses may differ over months or years of real use.
Additionally, this study focused only on young adults, meaning the results may be different for older generations who interact differently with emerging technologies. People with certain personality traits, such as high social anxiety, may also experience these platforms very differently. In the future, the researchers suggest focusing on the long-term habits formed through the widespread use of chatbots.
They hope future studies will investigate whether people develop a deep emotional dependency on these applications. The researchers recommend that software designers build certain boundaries into their applications to protect users from social isolation. For example, a chatbot could be programmed to prompt users to call a real friend after a long digital conversation. Implementing such safeguards could allow developers to harness the mental health benefits of artificial intelligence without isolating people from the physical world.
The study, “The dual impact of AI emotional intelligence on users: Do social chatbots promote psychological well-being or worsen social well-being?”, was authored by Shaphali Gupta, Sumit Saxena, and Sonia Kataria.

