Recent research published in the journal Consciousness and Cognition found that interacting with artificially intelligent partners changes our sense of control in unexpected ways. When people work on a task with a virtual agent capable of taking action, they consciously feel less responsible for the outcome, but implicit measures show increased tracking of their own actions. This suggests that the human mind adapts to the presence of digital partners much as it adapts to other people.
The scientific concept of “sense of agency” refers to the feeling that a person is the direct cause of the events happening around them. For example, when you turn on a light switch and the room lights up, you naturally feel a sense of ownership over the action and its outcome. Past research has shown that this feeling tends to subside when others are present and available for action.
This weakening is similar to the bystander effect. Individuals in a crowd feel less responsible for helping in an emergency because they assume someone else will intervene. This creates a diffusion of responsibility and spreads the mental burden of taking action among the group. Researchers at the University of East Anglia wanted to know whether this same psychological diffusion would occur in an online environment, where the bystander is a virtual artificial agent.
“This study is based on the phenomenon of the ‘bystander effect,’ which suggests that people are less likely to take action when there are others around them who can take action,” said study author Anh H. Le. “There is less of a sense of responsibility for taking action in these social situations, and there is a diffusion of responsibility.”
In addition to measuring explicit feelings of control, the scientists also wanted to capture an implicit measure. They did this by examining the temporal binding effect, a perceptual illusion in which people judge the interval between a voluntary action and its outcome to be shorter than it really is. The researchers sought to understand whether working with a computer program changes both this hidden timing perception and people’s explicit judgments of their own control.
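Temporal binding is typically quantified as the gap between the real action-outcome interval and the participant's estimate of it: the more the interval is underestimated, the stronger the implicit sense of agency. A minimal sketch of this scoring (the specific millisecond values below are hypothetical illustrations, not figures from the study):

```python
def binding_score(actual_ms, judged_ms):
    """Temporal binding: how much the action-outcome interval is
    underestimated. A larger positive score means stronger implicit agency."""
    return actual_ms - judged_ms

# Hypothetical illustration: a 500 ms tone-to-colour delay that a
# participant judges as 350 ms yields 150 ms of binding.
print(binding_score(500, 350))  # 150
# No underestimation means no binding effect.
print(binding_score(500, 500))  # 0
```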
To test these ideas, the researchers set up two online experiments. In the first experiment, 123 participants worked on a computer task in which a shape on a screen gradually expanded. Participants had to press a key to stop the shape before it turned red; if it did, they lost a large number of points.
Participants completed this task under a variety of conditions. In one scenario, they worked completely alone. Another scenario introduced a virtual partner named Bobby, represented by a smiling digital face on a computer screen.
Participants were told that Bobby was an artificial partner who could also press a button to stop the shape from expanding. Bobby was programmed to intervene only if the shape became dangerously large. This mimicked a shared situation in which either a human or a machine could be responsible for completing the task.
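The logic of the shared task can be sketched as a simple trial loop. The growth steps, danger threshold, and point values below are invented for illustration; the article does not report the study's actual parameters.

```python
def run_trial(with_partner, human_reacts_at=None, danger_size=100, max_size=120):
    """One trial of a shape-stopping task like the one described.
    The shape grows each step; the participant may stop it, and if a
    partner (Bobby) is present, it intervenes only near the danger
    threshold. Returns (who_stopped, points_lost)."""
    size = 0
    while size < max_size:
        size += 1
        if human_reacts_at is not None and size >= human_reacts_at:
            return "human", 0
        if with_partner and size >= danger_size:
            return "partner", 0  # Bobby steps in late, as a backup
    return "nobody", 50  # the shape turned red: large point loss

print(run_trial(with_partner=True, human_reacts_at=80))    # ('human', 0)
print(run_trial(with_partner=True, human_reacts_at=None))  # ('partner', 0)
print(run_trial(with_partner=False, human_reacts_at=None)) # ('nobody', 50)
```

The key design feature this captures is that someone always could act, so a point loss only occurs when both the participant and (if present) the partner fail to intervene.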
After the shape stopped, participants heard a tone and watched the shape change color. The researchers then asked them to estimate the exact amount of time that elapsed between the tone and the color change by holding down the spacebar. Finally, participants used a digital slider to rate, on a scale of 0 to 100, how much control they explicitly felt they had over the outcome.
“We adopted a paradigm where participants had to press a button to stop the circle from expanding without losing points. Participants worked alone or with an artificial partner, Bobby. If they collaborated with Bobby, Bobby could also act to stop the circle from expanding. And importantly, if no one acted, participants would lose most of their points, thus mimicking a distributed responsibility scenario.”
The data showed that when working with the virtual partner, participants rated their explicit sense of control as lower than when working alone. They consciously felt less responsible for the outcome. This suggests that the presence of the artificial agent caused a diffusion of responsibility at the conscious level.
At the same time, the implicit measure revealed exactly the opposite pattern. With Bobby present, participants judged the time between their actions and the outcomes as significantly shorter than when working alone. This increase in temporal binding provides evidence that their unconscious sense of agency actually became stronger when collaborating with an artificial partner.
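The dissociation described here comes down to a simple comparison of condition means running in opposite directions. The per-participant numbers below are purely illustrative, invented to show the pattern, and are not the study's data:

```python
def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

# Hypothetical scores: explicit control ratings (0-100) and
# temporal binding scores (ms of underestimation) per participant.
explicit_alone, explicit_partner = [70, 65, 72], [55, 50, 58]
binding_alone, binding_partner = [100, 120, 90], [150, 170, 160]

# Explicit agency drops with a partner present...
print(mean(explicit_partner) < mean(explicit_alone))  # True
# ...while implicit agency (binding) rises.
print(mean(binding_partner) > mean(binding_alone))    # True
```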
The scientists conducted a second online experiment with 102 new participants to see whether the mere visual presence of a digital partner could cause these psychological changes. They used the exact same shape-stopping task, but added a new observation condition. In this setup, Bobby’s avatar was displayed on the screen, but participants were informed that the artificial agent could only watch and not act.
The remaining steps were the same: participants estimated the time interval and rated their sense of conscious control. The results replicated the first experiment, showing that when Bobby was allowed to act, conscious control decreased and unconscious temporal binding increased. However, when Bobby was merely observing, the participants’ sense of agency matched the levels seen when they worked alone.
This shows that human psychology does not change just because a digital face is on screen. Instead, an artificial agent must have a genuine ability to act before it alters how the brain regulates our sense of agency. The researchers propose that the brain subconsciously enhances its tracking of actions to clearly distinguish between what the human has done and what the machine might do.
The findings show that “virtual artificial agents can actually impact a human’s sense of agency in human-machine interactions in two ways,” Le told PsyPost. “If an artificial agent is also capable of taking action, we explicitly feel less agency because we think about the actions that such an artificial agent might take. This in turn affects our own decisions about whether to act or not.”
“At the same time, we have an implicit system (temporal binding) that is reinforced to help us distinguish ourselves and our actions from those performed by others, in this case our artificial partners. The temporal binding effect, or implicit agency, increases, marking the distinction between self and other without us being consciously aware of it. As a result, the sense of agency is flexible and adapts to social situations (even those involving online artificial partners).”
This study challenges the assumption that humans view software and robots as mere tools with no influence on our inner psychology. It provides evidence that humans actually process the behavior of artificial agents in a way that closely resembles human social interaction. Even though the participants knew that Bobby was just code, their minds were still distributing responsibility to the program.
“In terms of practical significance, this shows that even obviously artificial, ‘made-up’ online partners (Bobby would appear ‘sad’, as if he had ‘feelings’, if the circle grew too large and no one stopped it) can interfere with our sense of agency,” the researchers noted. “However, this is conditional on whether the artificial partner can also act independently.”
“If artificial partners merely exist but cannot act, they do not influence the sense of agency,” the researchers explained. “We interact with online artificial systems (Siri, Alexa, etc.) more than ever every day. This suggests that our sense of agency may be relaxed while interacting with such systems, as if we were interacting with other humans (although this study did not test working with other human partners).”
One of the limitations of this study is that it only tested human interaction with digital avatars and did not have a direct comparison group working with another real human. The scenario was also relatively simple and limited to an online environment. It remains unclear exactly how these psychological changes will play out in physical spaces with advanced robot partners.
In future studies, the scientists plan to investigate these dynamics in large group settings. They want to investigate what happens to our sense of control when multiple human participants and multiple artificial agents all collaborate on a task. Adding more individuals to the mix tends to complicate the way the brain tracks behavior and assigns responsibility.
“What if there is one or more artificial partners, or different people, for example, a triad (or more) that includes the participant and two (or more) other agents (humans or artificial partners, or both), and anyone can act?” Le said. “It will be interesting to see how such group dynamics affect the sense of agency, perhaps further complicating the findings.”
“Special thanks to Dr Tom Burke, who was the main driving force behind this research, and Professor Andrew Bayliss for his supervision,” she added.
The study, “Working with online artificial partners enhances implicit agency and reduces explicit sense of agency,” was authored by Anh H. Le, Thomas Burke, and Andrew P. Bayliss.

