New research published in Frontiers in Human Neuroscience suggests that assistive robots may work best when they share control with the user, operating at a midpoint between full automation and fully manual operation.
People with severe movement disorders, such as amyotrophic lateral sclerosis (ALS), often require constant assistance from a caregiver with daily tasks such as cooking, eating, and moving objects. Bodily assistive robots have the potential to restore independence, but many existing systems are limited to simple pre-programmed tasks.
Brain-robot interfaces, which allow users to control robots using brain signals, offer a promising alternative, but are often noisy, slow, and difficult to use without the help of the robot itself.
Researchers at Araya in Tokyo, led by Hannah Douglas, set out to design a system that could overcome these challenges.
Their goal was to create a shared, realistic virtual kitchen environment where two users could work together with two mobile robots to complete tasks. Users direct the robots’ movements using a combination of brain signals from electroencephalography (EEG), muscle signals from electromyography (EMG), and eye tracking.
The system needed to be flexible enough to handle a wide range of daily life tasks, from picking up dishes to moving pots and pans, yet still provide users with a meaningful sense of control.
To investigate this, the team developed three different “autonomy levels” for the robot.
At the first level, remote-control assistance, the user controlled almost every step, including selecting objects, selecting actions, and steering the robot within the kitchen. “At this level, the robot primarily functions as an executor of detailed instructions,” the authors explained.
At the second level, shared autonomy, users still choose what they want the robot to do, but the robot handles the navigation and some of the finer details. “Users simply select landmarks with the eye tracker, and the robot moves autonomously, allowing them to focus on higher-level decision-making.”
At the third level, full automation, the user simply selects a high-level goal, such as food selection, and the robot automatically completes the entire sequence. “In this state, user input is minimal and the focus is on high-level goal selection rather than step-by-step control.”
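The division of labor across the three levels can be pictured as a table of who handles each step of a task. The sketch below is purely illustrative and not from the paper; the level names follow the article, but the step names and assignments are hypothetical simplifications.

```python
# Hypothetical sketch (not from the study): how responsibility for each
# step of a kitchen task might shift across the three autonomy levels.
from enum import Enum


class AutonomyLevel(Enum):
    REMOTE_CONTROL = 1  # user directs almost every step
    SHARED = 2          # user picks objects/actions, robot navigates
    FULL = 3            # user picks only the high-level goal


# Illustrative steps for fetching an object in the virtual kitchen.
STEPS = ["select_object", "select_action", "navigate", "execute"]


def responsibilities(level: AutonomyLevel) -> dict:
    """Return who performs each step at a given autonomy level."""
    if level is AutonomyLevel.REMOTE_CONTROL:
        # The robot is a pure executor of the user's detailed commands.
        return {step: "user" for step in STEPS}
    if level is AutonomyLevel.SHARED:
        # User selects targets (e.g. via eye tracking); robot moves itself.
        return {"select_object": "user", "select_action": "user",
                "navigate": "robot", "execute": "robot"}
    # Full automation: user names the goal, robot completes the sequence.
    return {step: ("user" if step == "select_object" else "robot")
            for step in STEPS}


for level in AutonomyLevel:
    print(level.name, responsibilities(level))
```

Under this framing, the levels differ only in how many steps fall on the user's side, which is what drives the workload and sense-of-control trade-offs the study measured.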
Thirty healthy adults (9 women, mean age 31 years) participated in a controlled study comparing the three autonomy levels. Although the participants were not people with disabilities, the researchers used this group to test the system’s usability and performance before moving to a clinical population.
The results showed clear differences between autonomy levels. Full automation was easiest for participants to use, required the least mental effort, and allowed them to complete the task fastest. Participants rated it highest for ease of use and lowest for workload. But this convenience came at a price. Users felt that they had no control over the robot’s movements.
In contrast, remote-control assistance was the most demanding. Participants had to manage navigation, object selection, and action commands, which increased workload and reduced performance. Many found it tiring and difficult to use.
Shared autonomy provided a middle ground. Although its task success rate was somewhat lower than full automation’s (66.7% vs. 80%), it preserved a stronger sense of ownership. Maintaining independence and personal control is especially important in assistive technology because it can empower individuals with severe motor disabilities. The researchers also found that shared autonomy was more robust when the EEG signal was noisy, a common problem in non-invasive brain-computer interfaces, because its high-precision eye tracking helps offset the risk of critical robot errors.
“Thus, while full automation is the optimal solution from an efficiency perspective, shared autonomy represents a valuable alternative for users who prioritize reliability and individuality,” Douglas et al. concluded.
The study has limitations. For example, all participants were healthy adults, so the results may not fully reflect the needs of people with ALS and other movement disorders.
The study, “Shared Levels of Autonomy at the Brain-Robot Interface: Enabling Multi-Robot and Multi-Human Collaboration in Daily Life Activities,” was authored by Hannah Douglas, Marina Di Vincenzo, Rousslan Fernand Julien Dossa, Luca Nunziante, Shivakanth Sujit, and Kai Arulkumaran.

