The direction of a computer-generated character’s gaze influences whether its facial expressions look like genuine emotional responses to a human observer. A digital smile or angry glare looks more genuine when the character makes direct eye contact with the viewer, and a sad digital face looks more genuine when its gaze is directed downward. These findings were recently published in the journal Cognition and Emotion.
Digital characters frequently appear in online therapy programs, video games, customer service applications, virtual companion software, and more. To be successful in these roles, virtual humans must develop trusting relationships with the users they interact with. This requires digital characters to display emotional states that human users interpret as authentic.
Virtual people have no real emotions and therefore rely entirely on visual cues to simulate authentic mental states. Previous research has investigated how physical characteristics shape how people interpret emotional expressions. To determine whether a smile truly represents happiness, people often look at the wrinkles in the skin around the eyes.
Observers commonly interpret these eye wrinkles as a sign of genuine joy, even though humans can voluntarily flex those muscles without feeling happy. Because such visual cues strongly shape human perception, the researchers wanted to know whether the direction of an avatar’s gaze could also determine how authentic an emotion appears to the viewer.
A framework of psychological research known as the shared signal hypothesis proposes a link between gaze direction and emotional social intentions. Emotions that invite interaction or signal confrontation, such as happiness or anger, express an intention to approach. According to the theory, these approach-oriented emotions are most convincing when paired with direct eye contact.
Conversely, emotions that signal social withdrawal or a desire to escape, such as sadness or fear, express avoidance intentions. The shared signal hypothesis posits that these avoidance emotions should appear most natural when the eyes look away from the observer.
Julia C. Haile, a researcher at the University of Western Australia, led a team that tested these assumptions using digital human models. The researchers focused entirely on computer-generated faces rather than photographs of real people. Because software cannot experience emotions, using digital models allowed the team to completely separate the perception of an emotion from any actual underlying emotion.
This approach also allowed the researchers to control and adjust eye position precisely and consistently, without the natural physical fluctuations that occur when real humans try to pose. To begin the project, the team used professional animation software to generate 10 highly realistic virtual adults. The same underlying technology is widely used to create sophisticated, realistic characters in modern blockbuster video games and animated films.
Human experts adjusted digital muscle sliders that correspond to different parts of the human face. The software maps a digital face onto a real human’s musculature, allowing experts to shape an avatar’s expression by targeting specific facial muscle groups. Rather than setting every digital muscle to a uniform level such as 50%, the designers tweaked the tension in the digital cheeks, eyebrows, and jawline until the model closely mimicked a reference photograph of a real human emotional expression, roughly along the lines of the sketch below.
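In code, this kind of muscle-based expression control can be pictured as a set of named activation weights blended onto a neutral face. The muscle names, values, and blend function below are purely illustrative assumptions, not the study’s actual animation software or its parameters:

```python
# Illustrative sketch only: muscle-group names, weights, and the blend function
# are hypothetical, not taken from the study's animation tools.

# An expression is a set of activation levels (0.0-1.0) for facial muscle groups.
ANGER = {
    "brow_lowerer": 0.85,   # pulls the eyebrows down and together
    "lid_tightener": 0.60,  # narrows the eyes
    "lip_presser": 0.70,    # tightens the mouth
    "cheek_raiser": 0.10,
}

def apply_expression(neutral, expression, strength=1.0):
    """Blend an expression onto a neutral face by scaling each muscle weight."""
    face = dict(neutral)
    for muscle, weight in expression.items():
        face[muscle] = min(1.0, face.get(muscle, 0.0) + strength * weight)
    return face

neutral_face = {muscle: 0.0 for muscle in ANGER}
angry_face = apply_expression(neutral_face, ANGER, strength=0.9)
print(angry_face)
```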
By adjusting these muscle groups, they created digital faces that expressed anger, fear, happiness, and sadness. The team presented a large batch of these generated faces to a group of participants and evaluated how well each one conveyed the intended emotion. The researchers then selected a final set of digital humans that reliably conveyed the target emotions.
They intentionally avoided choosing uniformly perfect-looking expressions, leaving room in the data for perceived authenticity to rise and fall depending on eye position. In their first major experiment, Haile and colleagues recruited 150 adults and asked them to observe the faces on a computer screen. Participants rated how authentic each expression appeared using a numerical scale.
The researchers shifted the angry and fearful avatars’ eyes to look either straight ahead or to the side at five increasingly wide angles. For the happy and sad avatars, the eyes either met the viewer’s gaze directly or moved downward in similar increments, yielding a grid of emotion-by-gaze conditions like the one sketched below.
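As a rough picture of how such a stimulus set could be enumerated, the sketch below builds the emotion-by-gaze grid; the specific angle values and labels are assumptions for illustration, not the values reported in the study:

```python
from itertools import product

# Hypothetical gaze offsets in degrees; the study used direct gaze plus five
# increasingly averted positions, but these exact angles are assumed.
GAZE_STEPS = [0, 5, 10, 15, 20, 25]  # 0 = direct eye contact

# Anger and fear were averted sideways; happiness and sadness downward.
AVERSION_AXIS = {
    "anger": "horizontal",
    "fear": "horizontal",
    "happiness": "vertical",
    "sadness": "vertical",
}

conditions = [
    {"emotion": emotion, "gaze_offset_deg": angle, "axis": AVERSION_AXIS[emotion]}
    for emotion, angle in product(AVERSION_AXIS, GAZE_STEPS)
]

for condition in conditions[:6]:
    print(condition)
```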
Before the rating task began, participants received clear instructions on how to rate the avatars. The researchers asked them to distinguish between the sheer intensity of an emotion and its authenticity: a subtle grimace can be completely genuine, while a highly exaggerated crying face can look posed or fake. Participants were asked to score each face solely on their impression of the avatar’s true internal state, regardless of whether the expression was mild or extreme.
During the evaluation, the researchers took steps to simulate the physical experience of making eye contact. To standardize the viewing experience, participants rested their heads on a chin support, keeping their eyes level with the digital face. Before each face appeared, a fixation cross flashed at the point between where the avatar’s eyes would appear on the screen.
This ensured that the participant’s direct line of sight was precisely aligned with the virtual human, creating a realistic simulation of mutual eye contact before the avatar’s gaze shifted to the side or down. Observers also rated the intensity of facial expressions on a separate scale. Stronger, more vibrant expressions tend to appear more authentic to observers.
In their analysis, the researchers used a statistical model to separate intensity ratings from authenticity ratings, which helped them isolate the independent effect of gaze direction (one common way to set up such an analysis is sketched below).
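The article does not name the exact model, but one standard approach is a mixed-effects regression that predicts authenticity ratings from gaze angle while controlling for rated intensity, with a random intercept per participant. The sketch below uses invented column names and synthetic data purely to illustrate that setup, not the study’s actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the ratings data; columns and values are invented.
rng = np.random.default_rng(0)
rows = []
for participant in range(1, 31):
    for emotion in ["anger", "fear", "happiness", "sadness"]:
        for gaze in [0, 5, 10, 15, 20, 25]:
            rows.append({
                "participant": participant,
                "emotion": emotion,
                "gaze_offset_deg": gaze,
                "intensity": rng.uniform(1, 7),
                "authenticity": rng.uniform(1, 7),
            })
df = pd.DataFrame(rows)

# Authenticity predicted from gaze aversion and emotion, controlling for rated
# intensity, with a random intercept for each participant.
model = smf.mixedlm(
    "authenticity ~ gaze_offset_deg * emotion + intensity",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())
```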
For the angry and happy avatars, facial expressions appeared most authentic when the digital character maintained direct eye contact with the viewer. As the avatar’s gaze moved away from the viewer, the illusion of genuine emotion faded: the happy expression became less and less believable with each downward step. Sadness behaved completely differently.
As the avatar looked further down, the sad digital face became more and more believable, with the highest authenticity ratings occurring at the steepest downward angle. Fear, however, did not follow the expected psychological pattern.
Shifting a fearful face’s gaze to the side produced no statistically significant change in how genuine the fear appeared to observers; viewing angle had virtually no effect on the ratings. The researchers then conducted a second experiment to see whether the specific direction of averted gaze mattered for sadness.
They wanted to know whether any form of looking away boosts perceived authenticity, or whether looking down specifically is characteristic of sadness. The researchers recruited a new group of 64 participants to rate the sad digital characters. This time, each avatar looked either straight ahead, downward, or to the side.
The results showed that direction critically determines how sadness is perceived. As in the first experiment, the sad expressions became increasingly believable as the avatar looked down. When the avatar’s gaze shifted to the side, the opposite happened, and the sadness seemed less genuine.
This suggests that humans read specific, finely tuned social messages from different types of eye movements, rather than treating every averted gaze as a general avoidance signal. The research team noted several limitations of their methodology. The study used static images of forward-facing avatars, which eliminated realistic head movements.
During daily interactions, humans often rotate their heads to match their eye movements. Still images also lack the sequential timing element of naturally unfolding facial expressions. The introduction of dynamic video has the potential to change the way observers interpret fleeting glances.
Additionally, the researchers generated virtual characters designed to match the physical characteristics of white Europeans, and they limited participants to individuals who grew up in mostly white European countries. This design choice prevented unfamiliarity with different physical appearances from distorting the ratings, but it also limits how far the results generalize to the broader world population.
Future research could test a greater variety of digital faces to see whether these patterns hold across different cultures. Researchers could also measure observers’ automatic physiological responses, such as heart rate or pupil dilation. Measuring these automatic physical reactions may capture subtle responses to emotions such as fear that explicit conscious ratings cannot.
The study, “Eyes Believe You: Gaze direction influences the perceived authenticity of facial expressions displayed by computer-generated people,” was authored by Julia C. Haile, Romina Palermo, Amy Dawel, Eva G. Krumhuber, Clare Sutherland, and Jason Bell.

