New research published in Computers in Human Behavior suggests that seeking advice from artificial intelligence (AI) can unintentionally strain relationships with human experts.
AI tools are rapidly becoming part of everyday decision-making, promising faster answers, personalized guidance, and lower costs. Many people use these tools alongside human experts to double-check information or get a second opinion.
Previous research has demonstrated that human advisors can react negatively when clients consult multiple experts. In such situations, advisors may interpret the search for a second opinion as a lack of trust. But until recently, little attention had been paid to how advisors respond when the second opinion comes from a computer algorithm rather than from another person.
So researchers Gerri Spassova (Monash University, Australia) and Mauricio Palmeira (University of South Florida, USA) set out to investigate how human advisors react when clients turn to AI for a second opinion.
To investigate, the pair conducted four experiments, each involving roughly 180 to 300 adult participants. In the first experiment, participants had actual real-world advising experience; in the three subsequent studies, participants were asked to imagine themselves working in advisory roles such as travel planning, finance, or nutrition. All participants read a scenario in which they had already provided professional advice to a client.
For example, in one experiment, participants playing the role of a financial advisor were told that, after receiving their investment recommendation, the client also sought advice from either another human financial advisor or an artificial intelligence system. Advisors then rated how they felt about the situation and whether it influenced their willingness to continue working with the client.
All four studies revealed a clear pattern: advisors were significantly less willing to continue working with clients who had consulted an AI. In fact, this negative reaction was stronger than when the client had consulted another human advisor.
The researchers suggested that professional identity drives these negative reactions. Advisors often view AI systems as far less capable than trained experts. As a result, when clients treat human experts and AI tools as comparable sources of advice, the comparison can feel insulting.
The study also revealed another surprising effect: advisors tend to judge clients who use AI more negatively. Participants rated these clients as less competent and less warm compared to clients who sought advice from another human expert.
Importantly, the negative reactions persisted even when AI systems were used only for initial background information (rather than the final decision) or as a complementary service (rather than as a replacement for human expert advice). In other words, simply consulting AI tools can change the way advisors view their clients.
“Our findings suggest that learning that a client is consulting an AI, consciously or not, can change how advisors perceive their clients and how much effort they are willing to put into the client relationship. Such negative effects, even if small, can undermine advisor-client relationships in the long run and potentially result in missed opportunities,” Spassova and Palmeira concluded.
However, the study has some limitations. Because it relied heavily on experimental role-playing scenarios rather than real-world advising relationships, actual responses may differ. Moreover, it remains unclear whether these negative reactions persist, diminish, or disappear entirely in long-term advisor-client relationships, especially when advisors know their clients well.
The study, “Algorithmic Discomfort: The Hidden Interpersonal Costs for Clients Seeking an AI Second Opinion,” was authored by Gerri Spassova and Mauricio Palmeira.