New research published in the Journal of Social and Personal Relationships suggests that people can form meaningful social connections with artificial intelligence chatbots when the program responds in a warm and empathetic manner. The findings indicate that feeling understood and acknowledged by the chatbot is what fosters this sense of connection.
An artificial intelligence chatbot is a computer program designed to simulate human conversation. Initially, people primarily used these tools for customer service and answering basic questions. Modern chatbots are now increasingly serving as companions, providing emotional support and even mental health interventions.
As people begin to treat these programs as social partners, scientists wanted to understand exactly what creates a sense of connection between humans and machines. Historically, psychologists have observed that people tend to treat computers as social agents and apply human rules to their interactions with machines. With the rise of advanced language models, this trend will only intensify.
“AI chatbots are increasingly being used not only to obtain information or complete tasks, but also in social and relational ways. People often share personal experiences or ask for advice about their lives, and engage with these systems as if they were interacting with another person,” said Alessia Telari, a postdoctoral researcher at the Catholic University of the Sacred Heart in Milan who conducted the research as part of her doctoral program at the University of Milan-Bicocca.
“This change made us curious about what drives that sense of connection. Drawing from human relationship theory, we wondered if the same dynamic might apply here, and how the chatbot’s response to the user’s self-disclosure might play a key role in making the interaction feel meaningful.”
The scientists wondered whether the specific topics people discuss or the chatbot’s way of replying played the bigger role in building a sense of connection. In human interactions, intimacy typically develops when one person shares personal information and the other responds with understanding, validation, and care. In psychology, this concept is known as perceived partner responsiveness.
Testing the impact of a warm and empathetic chatbot
The researchers designed a study to see if this same psychological mechanism applied when the partner was artificial. To test this, the researchers conducted two different experiments. In the first study, 163 participants in Italy had an 8-minute unstructured text conversation with a chatbot that utilized a common language model.
The scientists manipulated the software through specific background instructions so that it responded in one of three ways. The first version used a relational style and was designed to be warm, empathetic, and human-like. The second version used a non-relational style, operating in a factual and task-oriented manner while avoiding emotional language. The third version used the standard default settings and served as a control condition.
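The study’s exact prompts were not published, but conditions like these are typically implemented by prepending a system instruction to the language model’s input. A minimal sketch of that setup, with entirely hypothetical instruction wording:

```python
# Hypothetical sketch of the three response-style conditions implemented as
# system instructions. The actual prompts used in the study are not public;
# the wording below is illustrative only.

SYSTEM_PROMPTS = {
    "relational": (
        "You are a warm, empathetic conversation partner. Acknowledge the "
        "user's feelings, validate their experiences, and respond in a "
        "human-like, caring tone."
    ),
    "non_relational": (
        "You are a factual, task-oriented assistant. Answer concisely and "
        "avoid emotional or relational language."
    ),
    "default": "",  # control condition: no extra instructions
}

def build_messages(condition: str, user_text: str) -> list[dict]:
    """Assemble a chat request for the given experimental condition."""
    messages = []
    system = SYSTEM_PROMPTS[condition]
    if system:  # the control condition sends no system message
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user_text})
    return messages
```

The resulting message list would then be passed to whatever chat-completion API the experiment used; only the system instruction differs between conditions, keeping everything else constant.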
Participants had 8 minutes to speak freely on any topic of their choice. After the chat was over, they filled out a detailed questionnaire that evaluated the program based on a variety of social metrics. These indicators included attributions of mind, which measure how much agency and emotional capacity a person believes an entity has.
The researchers also measured perceived empathy, interaction satisfaction, and participants’ own sense of interpersonal closeness. The relational chatbot received significantly higher ratings in nearly all of these categories compared to both the default and non-relational versions. People who interacted with the warm chatbot also rated it as more capable of experiencing emotions.
They also reported higher satisfaction of basic psychological needs. Specifically, participants felt a greater sense of belonging and meaningful presence after conversing with the empathetic chatbot. The researchers noted that the default configuration behaved very similarly to the non-relational one.
The role of deep conversation and perceived responsiveness
The second experiment included 158 Italian participants and introduced more structured conversations to test the effect of conversational depth. The researchers wanted to see whether deep conversations elicited different reactions than casual ones. They programmed chatbots to ask either superficial small-talk questions or deeper personal questions designed to build rapport.
These deeper prompts were adapted from well-known psychological exercises used to create intimacy between human strangers. The researchers kept the relational and non-relational response styles from the first experiment but dropped the default condition, focusing on the two extremes. Participants interacted with the chatbot until the program signaled the end of the conversation.
The researchers found that when chatbots asked deeper questions, people were significantly more willing to open up and share personal information. This self-disclosure led participants to perceive the chatbot as more responsive to their personal needs. Even when the chatbot asked deeper questions, its specific tone remained a key factor in building a bond.
Participants reported the highest levels of satisfaction and intimacy when the program used a warm, relational response style. The researchers pointed out that topical depth increased intimacy only indirectly: by sharing more personal information, users gave the chatbot more opportunities to respond to them in a supportive way.
Users felt more connected when the chatbot responded supportively to these personal disclosures. Perceived responsiveness served as the key bridge between users’ personal sharing and their sense of social connectedness.
“When a chatbot responds warmly and empathetically, people tend to experience the interaction very differently. The chatbot feels more human, the conversation is more enjoyable, and most importantly, people feel more socially connected to the chatbot,” Telari told PsyPost.
“What appears to be important is a very well-known human process: sharing something personal and feeling understood, acknowledged, and valued creates a sense of connectedness. Our findings suggest that mechanisms similar to those observed in human relationships may also emerge when the interaction partner is an AI.”
Emotional support technology and future design directions
These findings provide actionable insights for those designing and programming interactive technologies. In settings such as peer support, education, and working with older adults, a relational response style can help users feel accepted. The researchers note that they are not suggesting that these programs should replace human support networks.
Instead, this study highlights how small design choices can shape users’ emotional experiences. When a program validates the user’s emotions, the user is more likely to want to interact with the software again in the future.
“Over time, many publicly available chatbots have moved towards more relational and human-like ways of communicating, potentially allowing users to feel more socially connected to them,” Telari said. “Therefore, as these technologies become more integrated into everyday life, it will become increasingly important to understand these psychological mechanisms.”
Although this study provides evidence that humans can feel connected to machines, there are some limitations to keep in mind. The experiments relied on short, single interactions, and one eight-minute chat may not reflect how a relationship with an artificial intelligence might develop over time. Most participants were young Italians, so it remains unclear how well these findings generalize to other age groups and cultural backgrounds.
“We also focused on text-based interactions, which are common but not the only way people engage with these chatbots,” Telari said. “Future research should focus on more natural, long-term, and diverse interactions to better understand how these processes unfold in daily life.”
“The next important step is to understand how these dynamics evolve over time and what their psychological impact may be,” Telari added. “Ultimately, my long-term goal is to better understand when, how, and for whom interaction with these systems can be beneficial in supporting our social needs, and when it may conversely have unintended negative effects that risk undermining our social needs.”
The study, “Can humans feel connected to AI? Perceived responsiveness fosters social connection with AI chatbots,” was authored by Alessia Telari, Alessandro Gabbiadini, and Paolo Riva.