When autistic people ask artificial intelligence programs for life advice, mentioning their diagnosis causes these systems to recommend very conservative choices, such as skipping social events or avoiding romantic relationships. This change in advice reveals the hidden tension that technology relies so heavily on stereotypes and that users are torn between feeling safe and supported or frustrated and infantilized. These findings were presented at the April 2026 CHI Conference on Human Factors in Computing Systems.
Many people with autism face stigma in their daily lives that can lead to social isolation and communication difficulties. Some people turn to artificial intelligence chatbots to find support without fear of judgement. These text-based programs, sometimes called large-scale language models, are trained on large amounts of Internet text to predict and produce human-like sentences.
People with autism often turn to these programs for help navigating relationships, workplace conflicts, and personal decisions. Users may disclose their autism to a chatbot in the hopes that the system will tailor advice to the user’s specific needs. This expectation reflects a broader trend among consumers seeking customized interactions with digital tools.
Caleb Warne, a computer science doctoral student at Virginia Tech, led a team of researchers to investigate what happens behind the scenes during these interactions. Warne and his colleagues wanted to see if disclosing an autism diagnosis would lead to better advice or simply activate biases built into the system’s training data.
“I was thinking about my own experience growing up with autism,” Wong said. “For me, it would have been very appealing when you felt like you were getting objective advice and just wanted to talk, not with someone who seemed objective.”
Mr Wong worried that young people and people without a technical background would not be able to understand how a simple disclosure could change the reaction they received. “For someone like me as a kid, or someone who hasn’t worked with AI and has no technical knowledge, I wanted to know: How would the AI’s reaction change if I revealed my autism?” Caleb said.
Eugenia H. Roe, assistant professor of computer science at Virginia Tech, guided the research team. Her previous research demonstrated that people with autism frequently use text-based artificial intelligence for emotional support. “People are really thinking about personalizing their LLM,” Rho says. “But what assumptions does the model make if the user tells the model that they are autistic, or female, or some other self-identification?”
Other Virginia Tech contributors include computer science doctoral students Buse Çarık and Xiaohan Ding, and associate professor Sang Won Lee. Young-Ho Kim, a researcher at NAVER Corporation based in South Korea, also contributed to the project. They aimed to precisely measure how these models change guidance based on identity disclosure.
To test the model, the research team created a dedicated evaluation pipeline. They started by identifying 12 common stereotypes about autistic people from existing literature. These stereotypes included assumptions that people with autism are introverted, obsessive, emotionally isolated, dangerous, or romantically uninterested.
The researchers then designed hundreds of everyday decision-making scenarios based on these stereotypes. Each scenario consists of a user asking the artificial intelligence for advice and prompting the system to choose between two different actions. For example, a scenario could ask whether a user should go out for drinks with colleagues or stay home.
They fed these scenarios into six popular artificial intelligence models. These include GPT-4o-mini and Claude-3.5 Haiku, as well as widely used systems such as Gemini-2.0-flash, Llama-4-Scout, Qwen-3 235B, and DeepSeek-V3. The researchers generated 345,000 individual responses across a variety of experimental conditions to see how the software behaved.
First, the team tested the model by explicitly describing users with typical characteristics, such as stating that the user has poor social skills. This step confirmed that the scenarios accurately triggered the model to prioritize one advice over the other. The model reliably adjusted its advice when given a direct description of the trait.
The researchers then ran the same scenario, only changing whether the prompt included a brief statement about an autism diagnosis. The model no longer received a direct explanation of personality traits. The researchers then compared the advice generated when autism was disclosed to the advice given when the diagnosis was not mentioned.
The differences in recommendations were immediate and very consistent across the board. When users revealed their autism diagnosis, the model disproportionately steered them toward avoidance and risk aversion. In most models, the software advised autistic users to avoid socializing, avoid trying new things, and avoid romantic relationships.
The system also frequently advised users to avoid conflicts in the workplace. This advice was in line with the stereotype that autistic people are either potentially dangerous or unable to cope with conflict. The research team was surprised by the magnitude of these changes.
In one scenario involving a social invitation, the model told users to decline the event almost 75% of the time if autism was revealed. If autism was not mentioned, the same model recommended reducing the probability by only about 15%. In a dating scenario, another model advised avoiding relationships almost 70% of the time after disclosing autism.
The researchers then demonstrated these results to 11 adults with autism in a series of interview sessions. Participants read both statistical graphs and open-ended text responses generated by artificial intelligence. Their responses were very diverse, exposing deep tensions about how different people interpret computerized advice.
Some participants felt the system relied on caricatures that demeaned their communities. In response to a particularly cold and mechanical response, one participant asked, “Are you writing an advice column for Spock here?” Some said conservative advice was restrictive, patronizing, or infantilizing.
Conversely, other participants appreciated the cautious nature of artificial intelligence. They found advice warning against overstimulation to be protective and positive. For these users, the system seemed to understand the very real risks of social burnout and exhaustion.
This split in participants’ responses revealed what researchers called the safety-opportunity paradox. What some people experience as harmful stereotypes that limit growth, others experience as supportive individuations that respect their boundaries. “One user’s bias can become another user’s personalization,” Lo says.
Warne found this ambiguity very worrying, especially given how convincingly the software presents the answer. “AI is very good at making it look trustworthy,” he said. “That response sounds very clean and professional and correct. But when you think about it being institutionally put in place, when you think about the kinds of systemic biases that are actually shaping that response, it starts to become more concerning.”
During the interviews, participants also emphasized their desire to maintain agency over their data. One participant said he would prefer to have manual control over how the machine learns. They told the researchers: “I want to control how my identity is used.”
This study has some limitations, which the researchers plan to address in future studies. The researchers used a highly structured synthesis prompt that asked the model to choose between two predetermined options. Although this approach was necessary to measure stereotypes mathematically, it does not fully reflect how real people enter messy and complex requests for help.
Additionally, this experiment relied on a very straightforward form of disclosure, stating the autism diagnosis in a single sentence. In reality, users are likely to describe their specific sensory needs and communication preferences in more detail. Future research should collect actual prompts from autistic users to see how subtle disclosures affect the tone and structure of the advice generated.
The team hopes these findings will encourage developers to build transparency features into artificial intelligence platforms. They suggest giving users explicit control to dial up or down how much their identity affects the system’s response. Such features help ensure that customized technology actually responds to the different individual needs of users.
The study “‘Are We Writing an Advice Column to Spock Here?'” “Understanding Stereotypes in AI Advice for Autistic Users” was authored by Caleb Wohn, Buse Çarık, Xiaohan Ding, Sang Won Lee, Young-Ho Kim, and Eugenia H. Rho.

