A new tool that not only identifies dietary and nutrition misinformation online, but also assesses the risk of content for potential harm, has been developed by a team of UCL researchers.
Unlike existing tools that make binary judgments about whether content is “true” or “false,” this first-of-its-kind tool addresses misinformation that is not obviously false but can be dangerously misleading, especially among vulnerable groups.
The tool’s developers argue that binary “true” or “false” ratings fail to capture the cumulative and contextual ways in which misleading health information can influence behavior and decision-making.
According to the WHO, health misinformation spread online poses a major public health threat. From restrictive diets and extreme fasting to the unsafe use of nutritional supplements (which is estimated to account for 20% of drug-induced liver injuries in the United States alone), misinformation can have dire and sometimes fatal consequences.
“When it comes to diet and nutrition, misinformation often operates through selective framing that masks potential health risks. Harmful and misleading content tends to fly under the radar of fact-checkers and escape significant scrutiny until a high-profile incident hits the headlines.”

Alex Luani, lead author and developer, UCL Institute of Education
The tool, called the Food and Nutrition Misinformation Risk Assessment Tool, or Diet-MisRAT, is a rules-based content analysis model that adapts the World Health Organization (WHO) approach for assessing hazardous exposures in the physical environment to digital information environments. It treats online content as a “vehicle” and its misleading features as “risk factors” known to increase recipient susceptibility, and ranks material green, amber, or red based on a weighted misinformation risk score.
Within this framework, the risk of potential harm depends on the content, the context, and the likelihood that recipients will be misled. By expanding the assessment of misinformation beyond simple factuality, the tool helps policymakers, digital platforms, and regulators to safeguard users, prioritize responses, and take proportionate action when faced with harmful and misleading content.
Diet-MisRAT was tested and refined through five rounds of validation, drawing on the judgments of approximately 60 experts in nutrition, dietetics, and public health. Testing showed that the tool can provide reliable ratings. The process also identified core characteristics of misinformation (inaccuracies, dangerous omissions, manipulative framing) and indicators that increase the likelihood of harm (how and under what conditions the content is consumed, and its prominence).
For example, when evaluating content that includes a claim such as “It is safer to give children higher doses of vitamin A than the MMR vaccine,” the tool will classify this in the critical risk tier because it presents a false safety framework, ignores the risk of vitamin A overdose, undermines public health guidance, and increases the likelihood that harmful real-world decisions will be made.
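The weighted, tiered scoring described above can be sketched in a few lines of Python. Everything here — the factor names, the weights, and the tier thresholds — is an illustrative assumption, not the published Diet-MisRAT model; it only shows how rules-based risk factors and amplifiers might combine into a green/amber/red rating.

```python
# Hypothetical sketch of a weighted, rules-based misinformation risk score.
# All weights, feature names, and thresholds are illustrative assumptions,
# not the values used by the published Diet-MisRAT tool.

# Core misinformation characteristics (illustrative weights).
CHARACTERISTIC_WEIGHTS = {
    "inaccuracy": 3.0,
    "dangerous_omission": 4.0,
    "manipulative_framing": 2.0,
}

# Amplifiers that increase the likelihood of harm (illustrative weights).
AMPLIFIER_WEIGHTS = {
    "high_prominence": 1.5,       # widely surfaced or algorithmically boosted
    "vulnerable_audience": 2.0,   # e.g. targets parents or patients
    "undermines_guidance": 2.5,   # contradicts public health advice
}

def risk_score(characteristics: set, amplifiers: set) -> float:
    """Sum the weights of all flagged risk factors."""
    score = sum(CHARACTERISTIC_WEIGHTS[c] for c in characteristics)
    score += sum(AMPLIFIER_WEIGHTS[a] for a in amplifiers)
    return score

def tier(score: float) -> str:
    """Map a weighted score onto green/amber/red tiers (assumed cutoffs)."""
    if score >= 8.0:
        return "red"    # critical risk: prioritise for action
    if score >= 4.0:
        return "amber"  # moderate risk: monitor or contextualise
    return "green"      # low risk

# The vitamin A vs. MMR claim from the article would flag several factors:
flags = {"inaccuracy", "dangerous_omission", "manipulative_framing"}
amps = {"vulnerable_audience", "undermines_guidance"}
print(tier(risk_score(flags, amps)))  # prints "red"
```

Because each factor carries its own weight, the same scheme lets a single dangerous omission push content into a higher tier than several minor inaccuracies, which mirrors the article's point that harm depends on context, not just factual error counts.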
Co-author Professor Anastasia Carrea (UCL Department of Medicine) said: “Including expert judgement is essential when assessing the risk of misinformation. Our tool has been tailored and validated based on feedback from around 60 experts in the field. This helps ensure that assessments of potential harm reflect sound professional judgement.”
By isolating misleading characteristics and linking them to potential recipient outcomes, researchers were able to understand what makes content risky and what characteristics determine the magnitude of impact.
Examples of harm associated with the online spread of health misinformation include the 2025 case of a man diagnosed with cholesterol-induced skin lesions after following a carnivore diet, a trend that was disproportionately amplified by social media algorithms, particularly within “manosphere” communities.
Another example was a reported case in which a person was hospitalized for several weeks after following incorrect AI-generated advice to replace sodium chloride (salt) with sodium bromide. Sodium bromide has no role in the diet and is toxic if taken regularly over a long period of time. Online misinformation is also linked to decisions to abandon life-saving cancer treatments in favor of unproven alternative diets.
This study contributes to the ongoing debate about how digital platforms, public health authorities, and policymakers should respond to the growing influence of misleading health advice online, particularly on social media, in search summaries, and in generative AI.
“In public health, we assess exposure to risk factors, and we believe that misleading health information should be treated the same way. Some misinformation can result in serious harm, so mitigation strategies need to be proportionate to the level of risk. The more severe the potential harm, the stronger the response should be,” Luani said.
“When an AI chatbot speaks with confidence, users may assume its advice is safe. If we can properly measure how misleading advice is and how much harm it may cause, we can build strong safeguards into our models and AI agents before they are deployed, rather than reacting after the harm has occurred.”
Co-author Professor Michael Rice (UCL Institute of Education) said: “By detailing the typical patterns that distort dietary, nutritional or supplement information, this tool’s risk assessment criteria can be taught and applied in education and professional training. This enables learners to understand not just whether something is wrong, but how and why it can cloud their judgement, and to recognize and challenge problems.”
Source:
University College London

Journal reference:
Luani, A., et al. (2026). Development and validation of a tool to detect the risk of misinformation in diet, nutrition, and health content (Diet-MisRAT). Scientific Reports. DOI: 10.1038/s41598-026-40534-2. https://www.nature.com/articles/s41598-026-40534-2

