Artificial intelligence consumes enormous amounts of electricity. According to the International Energy Agency, data centers, including those running AI systems, used approximately 415 terawatt-hours of electricity worldwide in 2024, roughly 1.5% of global electricity consumption, and demand is expected to more than double by 2030.
This rapid growth has raised concerns about sustainability. In response, researchers at the School of Engineering have built a proof-of-concept AI system designed to be far more efficient. Their approach can cut energy usage by a factor of up to 100 while also improving task performance.
A hybrid approach called neurosymbolic AI
This research comes from the laboratory of Karol Family Applied Technology Professor Matthias Scheutz. His team is developing neurosymbolic AI, which combines traditional neural networks with symbolic reasoning. The method mirrors the way people approach problems: by breaking them down into steps and categories.
The findings will be presented at the International Conference on Robotics and Automation in Vienna in May and will be published in the conference proceedings.
Teaching robots to see, understand, and act
Rather than working on well-known large language models (LLMs) such as ChatGPT and Gemini, the team focuses on AI systems used in robotics, known as vision-language-action (VLA) models. These models extend the capabilities of LLMs by incorporating vision and physical movement.
VLA models take visual data from cameras and instructions expressed in language, and convert that information into real-world actions: for example, driving a robot's wheels, arms, and fingers to complete a task.
Why traditional AI struggles with simple tasks
Traditional VLA systems rely heavily on data and learning through trial and error. If a robot is asked to build a tower by stacking blocks, it must first analyze the scene, identify each block, and decide how to place them correctly.
This process often leads to mistakes. Shadows, for example, can confuse the system about a block's shape, leading the robot to misplace pieces and topple the structure.
These errors are similar to issues seen in LLMs. Just as robots can misplace blocks, chatbots can produce false or misleading output, such as fabricating legal cases or generating images with unrealistic details like extra fingers.
How symbolic reasoning improves accuracy and efficiency
Symbolic reasoning offers another strategy. Rather than relying solely on patterns in the data, it uses rules and abstract concepts such as shape and balance. This allows the system to plan more efficiently and avoid unnecessary trial and error.
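As a loose illustration of the idea (not the team's actual system; the block properties and stability rule below are invented for this sketch), a single symbolic rule can prune doomed actions from a robot's options before any trial-and-error learning is spent on them:

```python
# Hypothetical sketch: a symbolic rule pruning a robot's action space.
# Block properties and the stability rule are illustrative assumptions.

def is_stable(block, target):
    """Symbolic constraint: a block may only rest on a larger, flat block."""
    return target["flat"] and target["size"] >= block["size"]

blocks = {
    "A": {"size": 3, "flat": True},
    "B": {"size": 2, "flat": True},
    "C": {"size": 1, "flat": False},  # e.g. a pyramid-shaped top piece
}

# Candidate "place X on Y" actions a purely statistical model might attempt.
candidates = [("B", "A"), ("A", "B"), ("C", "B"), ("B", "C")]

# The rule filters out physically doomed placements up front.
valid = [(x, y) for x, y in candidates if is_stable(blocks[x], blocks[y])]
print(valid)  # [('B', 'A'), ('C', 'B')]
```

Here the constraint eliminates half of the candidate actions outright, which is the intuition behind why rule-guided learning needs fewer attempts.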
“Like LLMs, VLA models operate on statistical results from large training sets of similar scenarios, which can lead to errors,” Scheutz said. “Neurosymbolic VLA can apply rules that limit the amount of trial and error during learning and arrive at a solution much faster. Not only can tasks be completed faster, but the time spent training the system is significantly reduced.”
High marks on a puzzle test
The researchers tested the system using the Tower of Hanoi puzzle, a classic problem that requires careful planning.
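The Tower of Hanoi has a well-known recursive solution, which is why it rewards planning over trial and error: a solver that reasons over the puzzle's rules can derive the full move sequence directly. A minimal sketch of that classic recursion (illustrative only, not the researchers' code):

```python
def hanoi(n, src, aux, dst, moves=None):
    """Plan the moves for n disks from peg src to peg dst via peg aux."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)  # clear the n-1 smaller disks aside
    moves.append((src, dst))            # move the largest free disk
    hanoi(n - 1, aux, src, dst, moves)  # restack the smaller disks on top
    return moves

plan = hanoi(3, "A", "B", "C")
print(len(plan))  # 7 moves, the provable minimum (2**3 - 1)
```

A purely statistical learner must discover this structure from many noisy attempts; a system with symbolic rules can exploit it immediately, which is the contrast the test was designed to expose.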
Neurosymbolic VLA achieved a success rate of 95%, compared with only 34% for the standard system. Even when given a more complex version of the puzzle than any it had encountered before, the hybrid system succeeded 78% of the time, while the traditional model failed on every attempt.
Training time also dropped dramatically: the new system completed its learning task in just 34 minutes, whereas the previous model needed more than a day and a half.
Significant energy savings during training and use
Energy consumption fell sharply as well. Training the neurosymbolic model required only 1% of the energy used by a standard VLA system, and running it consumed only 5% of the energy required by traditional approaches.
Scheutz likened this inefficiency to everyday AI tools. “These systems are only trying to predict the next word or action in a sequence, but that can be imperfect, leading to inaccurate results or hallucinations. The energy consumption is often disproportionate to the task. For example, when you search on Google, the AI summary at the top of the page uses up to 100 times more energy than generating a list of websites.”
The growing burden of AI on power infrastructure
As AI adoption accelerates across industries, the demand for computing power continues to grow. Companies are building ever-larger data centers, some requiring hundreds of megawatts of power, more than an entire small city consumes.
This trend is creating a race for infrastructure expansion and raising concerns about long-term energy constraints.
A more sustainable path for AI
Researchers suggest that current approaches built on LLMs and VLA models may not be sustainable in the long term. Although powerful, these systems consume large amounts of energy and can still produce unreliable results.
In contrast, neurosymbolic AI points in a different direction. Combining learning with structured inference could provide a more efficient and reliable foundation for future AI systems.

