When a person reaches across the table to pass the salt, their brain is doing something far more complex than recognizing a request and performing an action. The movement draws on a lifetime of physical experience: an awareness of where their hands are in space, what a salt shaker feels like, and the social sense of who is asking and why. In a split second, body and brain function as one.
Today’s most advanced artificial intelligence systems lack any such bodily grounding, and a new study from UCLA Health argues that this absence has significant implications for how these models work and for how safe and reliable they are.
In a paper published in the journal Neuron, Akira Kadambi, a postdoctoral fellow at UCLA Health, and colleagues propose that current AI systems are missing two key elements that humans take for granted: the body’s interaction with the physical world, and an internal awareness of one’s own states, such as fatigue, uncertainty, and physiological needs. The researchers call the latter “internal embodiment,” and propose that building functional analogs of it in AI is one of the field’s most important and unexplored frontiers.
Currently, the focus is on modeling the world through external embodiment, such as interacting with the physical environment, but far less attention is paid to internal dynamics, or what we call “internal embodiment.” For humans, the body serves as a kind of built-in safety system that helps us regulate how we engage with the world. When you are uncertain, depleted of energy, or your survival is in jeopardy, your body registers it. There is currently no equivalent for AI systems. They can sound confident whether or not they should, and that is a serious problem for a number of reasons, especially when these systems are deployed in consequential settings.”
Akira Kadambi, postdoctoral fellow, Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, UCLA, and first author of the paper
The AI body gap
The paper focuses on multimodal large language models, the class of technology that powers tools such as ChatGPT and Google’s Gemini. These systems can process and generate text, images, and video. They can describe a glass of water, for example, but they cannot tell you what it feels like to be thirsty, the authors say.
The authors state that this distinction is not merely philosophical; it has a measurable impact on how these systems behave. In one of the paper’s illustrations, the researchers showed several major AI models a simple image: a small number of dots arranged to suggest a human figure in motion, a well-established perceptual stimulus known as a point-light display that even newborns can recognize as human. Some models failed to identify the figure as a person, and one described it as a constellation of stars. Rotating the same image by just 20 degrees caused even the best-performing model to fail.
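To make the manipulation concrete, here is a minimal sketch of a point-light stimulus and the 20-degree rotation applied to it. The joint coordinates are invented for illustration; they are not the stimuli used in the study.

```python
import numpy as np

# Illustrative 2D joint positions (x, y) for a static point-light figure.
# These coordinates are invented for this sketch, not taken from the study.
JOINTS = np.array([
    [0.0, 1.8],                  # head
    [0.0, 1.4],                  # neck / shoulder midpoint
    [-0.3, 1.1], [0.3, 1.1],     # elbows
    [-0.4, 0.8], [0.4, 0.8],     # wrists
    [0.0, 0.9],                  # hip center
    [-0.15, 0.5], [0.15, 0.5],   # knees
    [-0.2, 0.0], [0.25, 0.05],   # ankles, mid-stride
])

def rotate(points: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate 2D points about their centroid by the given angle."""
    theta = np.radians(degrees)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    center = points.mean(axis=0)
    return (points - center) @ rot.T + center

upright = JOINTS                  # the baseline stimulus
tilted = rotate(JOINTS, 20.0)     # the small perturbation described above
```

The perturbation is trivial in pixel terms, which is what makes the reported failure striking: the underlying structure of the figure is unchanged.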
Humans do not fail this test, because human perception is anchored in a lifetime of physical experience in which people have acted as agents in the world. AI systems, by contrast, are trained on vast libraries of text and images but have no physical experience; without that anchor, they can only perform pattern matching, the study authors said.
Two types of “embodiment”
The paper draws a distinction that has gone largely undefined in AI research. It defines “external embodiment” as a system’s ability to interact with the physical world: to perceive its environment, plan actions, and respond to real-world feedback. This is a key focus of current multimodal AI models. Internal embodiment, by contrast, is absent from these models. The authors define it as the continuous monitoring of one’s own internal state, the biological equivalent of knowing when one is tired, anxious, or in distress.
Humans constantly and automatically regulate these internal states using the body’s organs, hormones, and nervous system. Humans use that information not only to maintain physical health but also to shape attention, memory, emotions, and social behavior.
“In contrast, current AI systems have no equivalent mechanism; they process inputs and produce outputs without persistent internal states that regulate behavior over time,” said Marco Iacoboni, Ph.D., a professor in the Department of Psychiatry and Biobehavioral Sciences at the David Geffen School of Medicine and senior author of the paper. “This is not only a performance limitation but a safety limitation. Without internal costs and constraints, AI systems have no intrinsic reason to avoid overconfidence, resist manipulation, or behave consistently.”
What happens next
The authors say the paper is intended to guide future research as AI technology develops. They propose what they call a “dual embodiment framework”: a set of principles for building AI systems that model both their interactions with the external world and their own internal states.
These internal state variables need not replicate human biology directly. Instead, they would serve as persistent signals, tracking quantities such as uncertainty, processing load, and reliability, that shape the system’s output and constrain its operation over time.
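As a loose illustration of that idea, here is a minimal sketch, not the authors’ design, of an agent whose persistent internal state gates its behavior. The class, state variables, update rules, and thresholds are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class InternalState:
    """Persistent signals that outlive any single query (illustrative only)."""
    uncertainty: float = 0.0   # running estimate of recent answer unreliability
    load: float = 0.0          # accumulated processing cost, a rough fatigue analog

class EmbodiedishAgent:
    """Toy agent whose output is gated by its internal state.

    A sketch of the general idea of internal state regulation,
    not the framework proposed in the paper.
    """

    def __init__(self, model):
        self.model = model     # any callable: prompt -> (answer, confidence)
        self.state = InternalState()

    def respond(self, prompt: str) -> str:
        answer, confidence = self.model(prompt)

        # Update persistent state: low-confidence answers raise uncertainty,
        # and every call adds processing load; both decay slowly over time.
        self.state.uncertainty = 0.9 * self.state.uncertainty + 0.1 * (1.0 - confidence)
        self.state.load = 0.95 * self.state.load + 0.05

        # Internal costs constrain behavior: hedge when uncertainty is high,
        # defer work when accumulated load is high (thresholds are arbitrary).
        if self.state.uncertainty > 0.5:
            return f"I'm not confident, but: {answer}"
        if self.state.load > 0.8:
            return "Deferring: sustained load is high; please retry later."
        return answer
```

The specific update rules here carry no special significance; the point is only that the agent’s history leaves a persistent trace that shapes its future output, rather than each query being processed in isolation.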
The authors also propose a new class of tests, or benchmarks, designed to measure a system’s internal embodiment. Existing AI benchmarks focus mostly on external performance: whether a system can move through space, identify objects, and complete tasks. The UCLA researchers argue that the field needs assessments that probe whether systems can monitor their own internal states, maintain stability when those states are perturbed, and behave prosocially in ways that emerge from shared internal representations rather than statistical imitation. A rough sketch of what such a perturbation test might look like follows below.
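Building on the toy agent sketched above, a test of this kind might inject a perturbation directly into the persistent state and score how behavior changes. The metric below is invented for illustration and is not a benchmark from the paper.

```python
def perturbation_check(agent: "EmbodiedishAgent", prompts: list[str]) -> float:
    """Toy stability probe: inject high uncertainty into the agent's
    persistent state and measure how often its answers are hedged.
    An invented metric for illustration only.
    """
    agent.state.uncertainty = 0.9   # perturb the internal state directly
    responses = [agent.respond(p) for p in prompts]
    hedged = sum(r.startswith("I'm not confident") for r in responses)
    return hedged / len(prompts)
```

A system with no persistent internal state has nothing for such a probe to perturb, which is precisely the gap the authors argue current benchmarks fail to measure.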
“What this research does is translate that insight directly into AI development,” Iacoboni said. “If we want an AI system that is not just superficially fluent but truly consistent with human behavior, we may need to give it vulnerabilities and put checks in place that act like internal self-regulation.”
Source:
University of California, Los Angeles Health Sciences

