There is a good chance that in the near future, patients will have to describe their symptoms to an AI before seeing a doctor. The AI would then decide whether the case is an emergency or can wait, and schedule the appointment accordingly.
We are not there yet, but digitization is progressing rapidly in medicine. AI chatbots and digital symptom checkers are playing an increasingly important role as the first point of contact for so-called “self-triage”: the initial assessment, by patients themselves, of how urgently they need treatment.
However, while the technical capabilities of these systems are constantly improving, research is increasingly turning to another element of how humans and machines communicate, and for good reason: even the best technology, especially in medical diagnostics, depends on accurate information that users do not always fully provide.
Human resistance limits AI potential
This is the central finding of a study now published in the journal Nature Health. The study was led by Professor Wilfried Kunde, holder of the Chair of Psychology III at the University of Würzburg, and Moritz Reis, a researcher in the same department. It also involved scientists from Charité – Universitätsmedizin Berlin and the University of Cambridge, as well as Berlin’s Helios Klinikum Emil von Behring and the Vivantes Klinikum Neukölln.
“500 study participants were asked to write simulated symptom reports for two common complaints: unusual headaches and flu-like symptoms,” lead author Moritz Reis explains the study design. Participants were led to believe that their reports would be read either by an AI chatbot or by a human doctor. The aim was to examine the quality of these reports in terms of their suitability for an initial medical urgency assessment.
Lower quality, less detail
Key finding: When participants believed they were communicating with artificial intelligence, the suitability of the explanation for an initial medical evaluation was significantly reduced compared to an assumed interaction with a medical professional. This effect was also observed among participants who were actually experiencing the relevant symptoms at the time of the study.
This reduction in quality is directly reflected in the level of detail of the reports: descriptions addressed to a medical professional averaged 255.6 characters, while those addressed to a chatbot averaged only 228.7 characters.
A difference of roughly 27 characters may seem small, but the researchers say the effect is relevant in practice: even high-performance AI models can end up giving incorrect medical advice, because no model can produce an accurate assessment if patients withhold important information. The success of a digital initial assessment therefore depends less on the system’s computational power than on the patient’s willingness to describe their symptoms in detail.
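The gap between the two averages works out to about a 10% reduction in report length. A quick sketch of that arithmetic, using the figures reported above:

```python
# Average report lengths in characters, as reported in the study
to_physician = 255.6
to_chatbot = 228.7

diff = to_physician - to_chatbot
pct_shorter = diff / to_physician * 100

print(f"Difference: {diff:.1f} characters ({pct_shorter:.1f}% shorter)")
# → Difference: 26.9 characters (10.5% shorter)
```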
Psychological barriers: Concerns about “one-size-fits-all diagnosis”
But why are people so hesitant when it comes to machines? One key reason is a phenomenon known as “uniqueness neglect.” “Many people believe that AI cannot pick up on the individual nuances of their personal situation, but simply matches standardized patterns,” explains Wilfried Kunde.
Furthermore, skepticism about the diagnostic capabilities of algorithms and concerns about privacy can lead people to withhold information or describe their symptoms only vaguely. Moritz Reis summarizes the human element as follows: “If we don’t trust a machine to understand our uniqueness, we may unconsciously withhold the very information it needs to provide accurate assistance.” This psychological filter means that medically relevant details may never even reach the system, thereby reducing the quality of the diagnosis.
Improving interaction with machines
In the researchers’ view, the findings clearly demonstrate that technological advances in AI alone are not enough. They see a potential solution in the intelligent design of user interfaces.
To improve the quality of symptom reports, developers should design AI systems to show concrete examples of high-quality descriptions and to proactively ask for missing details. Only if users are encouraged to report in detail can misdiagnoses be avoided and the burden on the healthcare system be effectively reduced.
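The recommendation above can be sketched as a simple intake step that shows an example report and asks follow-up questions for whatever the patient left out. This is a minimal, hypothetical illustration: the field names, example prompt, and function are assumptions for the sketch, not part of the study or any real system.

```python
# Hypothetical self-triage intake: proactively request missing details.
# The required fields and prompts below are illustrative assumptions.
REQUIRED_DETAILS = {
    "onset": "When did the symptoms start?",
    "severity": "How severe are they on a scale of 1-10?",
    "duration": "Are they constant, or do they come and go?",
}

EXAMPLE_REPORT = (
    "Example of a helpful report: 'Throbbing headache on the left side "
    "since yesterday morning, about 7/10, worse when standing up.'"
)

def follow_up_questions(provided: set) -> list:
    """Return a prompt for every required detail the report still lacks."""
    return [q for key, q in REQUIRED_DETAILS.items() if key not in provided]

# Usage: a report that only mentions onset triggers two follow-ups.
print(EXAMPLE_REPORT)
for question in follow_up_questions({"onset"}):
    print("Follow-up:", question)
```

In a real system, the set of details already provided would come from an extraction step over the patient's free-text report; here it is passed in directly to keep the sketch self-contained.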
Source:
Julius-Maximilians-Universität Würzburg (JMU)
Journal reference:
Reis, M., et al. (2026). Reduced quality of symptom reporting in human-chatbot versus human-physician interactions. Nature Health. DOI: 10.1038/s44360-026-00116-y. https://www.nature.com/articles/s44360-026-00116-y

