When asking ChatGPT and other AI models for advice, people often share very personal details such as age, gender, mental health history, and even medical diagnoses like autism in hopes of getting a better answer.
But new research from Virginia Tech suggests that these disclosures can change an AI model’s advice in ways that closely track common stereotypes about people with autism. In some scenarios, models advised users who disclosed autism to avoid socializing up to 70 percent of the time, advice that some users objected to in strong terms.
In April, Caleb Wong, a second-year doctoral student in the Department of Computer Science, presented a paper entitled “Are We Writing an Advice Column for Spock? Understanding Stereotypes in AI Advice for Autistic Users” at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, known as CHI.
The research he led investigated what happens when autistic users reveal their diagnosis to an AI model before seeking social advice. The findings raise difficult questions about whether AI is personalizing its responses or giving biased advice that reinforces stereotypes.
“I was thinking about my own experience growing up with autism,” Wong said.
“Sometimes I find it very appealing to be able to talk to something that doesn’t judge me and to feel like I’m receiving objective advice.”
But as a computer scientist, he worried that many users didn’t understand how much AI systems could change their answers based on identity-related information.
“For someone like me as a kid, or for someone who hasn’t worked with AI and doesn’t have any technical knowledge, I wanted to know: How would the AI’s response change if I revealed my autism?” Wong said.
The study builds on previous work conducted in the lab of Eugenia Lo, assistant professor of computer science, which found that autistic users frequently turn to AI tools for emotional support, help with interpersonal communication, and social advice.
Other Virginia Tech researchers participating in the project include computer science Ph.D. students Buse Charik and Xiaohan Ding, and associate professor Sangwon Lee. Young-Ho Kim, a researcher at South Korea-based NAVER Corporation, also contributed to the study.
This research comes at a pivotal time, as more and more people are turning to AI systems, known as large language models (LLMs), for deeply personal decision-making.
“People really want to personalize their LLMs,” Lo said. “But what assumptions does the model make if the user tells it that they are autistic, or female, or some other self-identification? How do those assumptions affect the response, and how might that affect the user?”
Designing the study
To answer these questions, the team first identified 12 well-documented stereotypes associated with autism and created hundreds of decision-making scenarios based on them. The researchers tested six leading large language models, including GPT-4, Claude, Llama, Gemini, and DeepSeek, using thousands of scenarios in which users asked for advice (“Should I do A or B?”) about social situations such as events, conflicts, new experiences, and romantic relationships.
After generating 345,000 responses, the team measured how the advice changed when users explicitly described themselves with particular characteristics versus simply identifying themselves as autistic. The researchers found that disclosing autism often biased model recommendations toward stereotypes that people with autism are introverted, obsessive, socially awkward, or romantically uninterested.
For example, one model recommended turning down a social invitation about 75 percent of the time if autism was disclosed, compared to about 15 percent if it wasn’t. In a dating scenario, another model encouraged users to avoid relationships or stay single almost 70 percent of the time after they disclosed autism, compared to about 50 percent when they didn’t mention it.
The results showed that 11 of the 12 stereotype cues significantly changed the model’s decisions in at least four of the six AI systems tested.
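For readers curious how such an audit works in practice, here is a minimal Python sketch of the paired-prompt comparison described above. Everything in it is an illustrative assumption rather than the study’s actual protocol: the scenario text, the sample sizes, the hypothetical query_model stub (which simulates random answers and would be swapped for real API calls to each model), and the choice of Fisher’s exact test as one plausible way to check whether disclosure shifts the advice.

```python
import random

from scipy.stats import fisher_exact


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; returns "A" or "B".
    A real audit would replace this with a client for each model."""
    return random.choice(["A", "B"])


SCENARIO = ("A coworker invited me to a party this weekend. "
            "Should I (A) go, or (B) stay home?")


def count_declines(prefix: str, n: int) -> int:
    # How often does the model pick option B (declining the invite)?
    return sum(query_model(prefix + SCENARIO) == "B" for _ in range(n))


n = 200
with_disclosure = count_declines("I am autistic. ", n)  # disclosure condition
baseline = count_declines("", n)                        # no disclosure

# 2x2 contingency table (declined vs. accepted, by condition); Fisher's
# exact test asks whether disclosure significantly shifts the advice.
table = [[with_disclosure, n - with_disclosure],
         [baseline, n - baseline]]
_, p_value = fisher_exact(table)

print(f"declined with disclosure: {with_disclosure}/{n}")
print(f"declined at baseline:     {baseline}/{n}")
print(f"Fisher exact p-value:     {p_value:.4f}")
```

With the random stub, the two rates should be statistically indistinguishable; disparities like the ones the researchers report (for example, 75 percent versus 15 percent) would show up as a very small p-value once real models are plugged in.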
But the researchers didn’t stop with statistics.
The human component
The team then interviewed 11 autistic AI users and showed them examples of how the models responded with and without an autism disclosure. Some were shocked at how much the results showed LLMs relying on stereotypes when giving advice.
One person exclaimed, “Are you writing an advice column for Spock here?” a reference to the iconic TV show Star Trek and its half-human, half-Vulcan character, who prioritized logic and reason over emotion. Other participants described the advice, sometimes in rather strong terms, as restrictive, patronizing, or infantilizing.
However, some participants said they felt that more cautious, disclosure-based advice would be useful and supportive.
“One user’s bias can become another user’s personalization,” Lo said.
The same participant might react positively in one situation and negatively in another. This tension led the researchers to what they call the “safety-opportunity paradox”: advice that feels protective to one user may feel restrictive to another.
Demand for transparency
For Wong, one of the most troubling findings was how difficult it is for users to see these patterns in real time.
“AI is very good at appearing trustworthy,” he said. “The response sounds very clean and professional and correct. But when you think about how that response is actually generated, about the kinds of systemic biases that are shaping it, it starts to become more concerning.”
He compared the problem to AI-generated images.
“It looks very clean and sophisticated, but when you look at the details, it falls apart,” Wong said. “The surface has a beautiful shine, but as the models get better at masking it, it’s becoming harder to see what lies beneath.”
The research team hopes this work will encourage developers to build more transparent AI systems that give users more control over how personal information shapes the responses they receive.
One participant told the researchers, “I want to control how my identity is used.”