Motor imagery (MI) is the mental process of imagining certain limb movements, such as raising a hand or walking, without physically performing them. These imagined movements generate distinct patterns of brain activity that can be recorded using electroencephalography (EEG).
By decoding these signals, researchers can enable direct communication between the brain and a computer, making MI-EEG a powerful tool for applications such as motor rehabilitation and assisted control of wheelchairs and prosthetics.
However, the EEG signals generated during MI vary widely between individuals and over time. Traditional MI-EEG methods are often unable to capture and decode complex patterns in EEG signals, especially the dynamic spatial and temporal variations inherent in brain activity.
A breakthrough by Japanese researchers is closing this gap and enabling more seamless communication between human brains and machines. PhD student Chaowen Shen and Professor Akio Namiki from Chiba University Graduate School of Science and Engineering have developed a new artificial intelligence-based framework called Embedding-Driven Graph Convolutional Network (EDGCN).
This approach utilizes a spatiotemporal embedding fusion mechanism to decode the dynamic fluctuations of EEG patterns, improving its adaptability and versatility compared to traditional brain-computer interface (BCI) techniques.
“Deciphering MI-EEG is not only an engineering challenge but also a window into understanding the neural mechanisms of MI and the functional connectivity of the brain. We hope to design more efficient models that can facilitate our understanding and use of the human brain.”
Akio Namiki (Professor, Graduate School of Science and Engineering, Chiba University)
Their findings were published online on January 22, 2026, in the journal Information Fusion and will appear in volume 131 (July 1, 2026).
Traditional machine learning models employ common spatial patterns (CSP) and predefined graph structures, relying heavily on expert knowledge. This structural rigidity limits their ability to fully capture complex patterns hidden within brain signals and increases computational costs. More recently, convolutional neural networks and graph convolutional networks have shown good performance in decoding EEG signals.
However, traditional deep learning approaches can still struggle to fully capture the complex interactions between different brain regions and dynamically evolving individual variation. To overcome this limitation, researchers are increasingly turning to graph-based techniques to better represent network-like activity in the brain.
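To make the graph-based idea concrete, the sketch below shows a single generic graph-convolution step over EEG channels treated as graph nodes. This is not the authors' EDGCN implementation; it is a minimal illustration of the standard normalized-adjacency formulation, with a toy 4-channel graph and random features as assumed inputs.

```python
import numpy as np

def graph_conv(X, A, W):
    """One generic graph-convolution step: normalize the adjacency,
    aggregate each node's (channel's) neighborhood, then apply a
    learnable linear map followed by a ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

# Toy example: 4 EEG channels, 8 features each, 3 output features
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))               # per-channel features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)     # chain-shaped channel graph
W = rng.standard_normal((8, 3))               # learnable weights
H = graph_conv(X, A, W)
print(H.shape)  # (4, 3)
```

In a trained network, W would be learned and A would encode the actual electrode relationships rather than this hand-written chain.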
EEG signals record brain activity from multiple electrodes over time, providing multichannel time-series data. To capture changes in patterns over different time scales, the researchers designed a local feature extraction module that processes EEG signals through multiple parallel paths. However, because EEG signals are sampled at discrete points in time, analysis at a single scale can miss important moments in brain activity.
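The idea of parallel paths over different time scales can be sketched as follows. This is a simplified stand-in, not the paper's module: it uses plain moving-average filters of three assumed kernel lengths in place of learned convolutions, applied to random data shaped like a 22-channel EEG segment.

```python
import numpy as np

def moving_average(sig, k):
    """Smooth a 1-D signal with a length-k averaging kernel
    (output has the same length as the input)."""
    return np.convolve(sig, np.ones(k) / k, mode="same")

def parallel_temporal_features(eeg, kernel_sizes=(5, 15, 45)):
    """Pass each channel through several parallel temporal filters of
    different lengths and stack the results: short kernels preserve
    fast transients, long kernels preserve slow trends."""
    paths = []
    for k in kernel_sizes:
        filtered = np.stack([moving_average(ch, k) for ch in eeg])
        paths.append(filtered)
    return np.stack(paths)  # (n_paths, n_channels, n_samples)

rng = np.random.default_rng(1)
eeg = rng.standard_normal((22, 250))   # 22 channels, 1 s at 250 Hz
feats = parallel_temporal_features(eeg)
print(feats.shape)  # (3, 22, 250)
```

A deep-learning version would replace the fixed averaging kernels with learned convolution filters, one branch per kernel size.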
To overcome this challenge, the researchers employed a multiresolution time embedding strategy that allows the model to analyze signals at multiple time scales by increasing or decreasing resolution. This approach allows for a more consistent and comprehensive understanding of dynamic brain activity.
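One simple way to realize a multiresolution view of a signal, in the spirit of the strategy described above (though not the authors' actual embedding), is to average-pool each channel at several assumed downsampling factors:

```python
import numpy as np

def avg_pool(sig, factor):
    """Downsample a 1-D signal by averaging non-overlapping windows."""
    n = len(sig) // factor * factor
    return sig[:n].reshape(-1, factor).mean(axis=1)

def multiresolution_views(eeg, factors=(1, 2, 4)):
    """Represent each channel at several temporal resolutions, so both
    fast fluctuations and slower trends are visible to a downstream model."""
    return {f: np.stack([avg_pool(ch, f) for ch in eeg]) for f in factors}

rng = np.random.default_rng(2)
eeg = rng.standard_normal((22, 256))   # 22 channels, 256 samples
views = multiresolution_views(eeg)
for f, v in views.items():
    print(f, v.shape)   # 1 (22, 256), 2 (22, 128), 4 (22, 64)
```

Each view would then be embedded separately and fused, so that no single sampling scale dominates the representation.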
Furthermore, they introduced a structure-aware spatial embedding mechanism that connects “local” (structurally close) and “global” (functionally connected) brain channels and captures the synchronization of brain activity across different regions. This approach allows for a more detailed spatial representation of dynamically changing short- and long-range interactions during MI tasks.
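The local/global distinction can be illustrated by building a channel graph from two sources of edges: physical proximity of electrodes and functional coupling of their signals. The sketch below is an assumption-laden simplification (random 2-D electrode positions, absolute Pearson correlation, hand-picked thresholds), not the structure-aware mechanism from the paper:

```python
import numpy as np

def build_adjacency(positions, eeg, radius=0.5, corr_thresh=0.7):
    """Mix 'local' edges (electrodes that are physically close) with
    'global' edges (channels whose signals are strongly correlated)."""
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    local = (dists < radius).astype(float)        # structural neighbors
    corr = np.abs(np.corrcoef(eeg))               # functional coupling
    global_ = (corr > corr_thresh).astype(float)  # long-range links
    A = np.clip(local + global_, 0, 1)            # union of both edge sets
    np.fill_diagonal(A, 0)                        # drop self-edges
    return A

rng = np.random.default_rng(3)
positions = rng.random((8, 2))       # hypothetical 2-D electrode layout
eeg = rng.standard_normal((8, 500))  # 8 channels, 500 samples
A = build_adjacency(positions, eeg)
print(A.shape)  # (8, 8)
```

Feeding such an adjacency into a graph convolution lets the model mix information along both short-range anatomical links and long-range functional ones.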
Finally, to verify the effectiveness of the model, the researchers conducted a series of MI classification experiments on public datasets. The model outperformed current state-of-the-art methods, achieving classification accuracies of 86.50% and 90.14% and a decoding accuracy of 64.04%. Furthermore, removing the spatial and temporal embeddings degraded performance, suggesting that they are important for capturing the complex spatiotemporal heterogeneity of EEG signals.
Overall, the proposed EDGCN model showed significant advantages in decoding heterogeneous MI-EEG signals. In the future, the researchers hope to extend it to real-world portable BCI hardware and rehabilitation scenarios. Additionally, since EEG signals contain sensitive biometric information, it will be important to develop advanced encryption strategies to defend against security attacks.
“EDGCN’s high decoding accuracy and generalization capabilities will drive the commercialization of consumer-grade BCI products. Patients with stroke, spinal cord injury, amyotrophic lateral sclerosis, and other movement disorders can be assisted with stable control of neurorehabilitation devices such as wheelchairs, prosthetic limbs, and upper limb rehabilitation robots through simplified MI,” concludes Professor Namiki.
Source:
Journal reference:
Shen, C., et al. (2026) EDGCN: An embedding-driven fusion framework for heterogeneity-aware motor imagery decoding. Information Fusion. DOI: 10.1016/j.inffus.2026.104170. https://www.sciencedirect.com/science/article/pii/S1566253526000497.

