Researchers at the University of Pennsylvania have introduced a new method of using artificial intelligence to tackle one of the most difficult challenges in mathematics: inverse partial differential equations (PDEs). These equations are essential to understanding complex systems, but solving them has long pushed the limits of both mathematics and computing.
The team’s solution, called “Mollifier Layers,” improves how AI handles these problems by refining the mathematics behind the process, rather than simply increasing computing power. This approach could have a wide range of applications, from deciphering genetic activity to improving weather forecasting.
“Solving an inverse problem is like looking at the ripples in a pond and working backwards to figure out where the pebble landed,” said Vivek Shenoy, Eduardo D. Glandt President’s Distinguished Professor in Materials Science and Engineering (MSE) and lead author of the study, published in Transactions on Machine Learning Research (TMLR) and slated for presentation at the Conference on Neural Information Processing Systems (NeurIPS 2026). “The effects are clearly visible, but the real challenge is to infer the hidden causes.”
Rather than relying on more powerful hardware, the researchers focused on improving the underlying mathematics. “Modern AI often advances by scaling up the amount of computation,” says Vinayak Vinayak, a doctoral candidate in MSE and co-first author of the study. “But some scientific tasks require better mathematics, not just more computational power.”
Why are inverse partial differential equations important in science?
Differential equations are the backbone of scientific modeling. They describe how a system changes over time, whether that is population growth, heat flow, or a chemical reaction.
Partial differential equations extend this idea further by capturing how systems evolve in both space and time. Scientists use them to study everything from weather patterns to how heat moves through matter and even how DNA is organized within cells.
Inverse partial differential equations go one step further. Rather than predicting outcomes based on known rules, scientists can start with observed data and work backwards to uncover the hidden forces driving that observation.
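As a simple, illustrative example (a textbook case, not the specific system studied in the paper), consider heat spreading through a material whose diffusivity D(x) varies from place to place. The forward problem computes the temperature field from a known D; the inverse problem recovers the hidden D from noisy temperature measurements:

```latex
% Forward problem: given the diffusivity field D(x), solve for the temperature u(x,t)
\frac{\partial u}{\partial t} = \nabla \cdot \bigl( D(x)\, \nabla u \bigr)

% Inverse problem: given noisy observations u_obs, infer the hidden field D(x)
D^{*} = \arg\min_{D} \; \bigl\| u_{D} - u_{\mathrm{obs}} \bigr\|^{2}
```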
“For years, we have been using these equations to study how chromatin, the folded state of DNA in the nucleus, is organized in living cells,” Shenoy says. “But we kept running into the same problem. Although we could see the structure and model its formation, we couldn’t reliably infer the epigenetic processes that drive this system, the chemical changes that help control which genes are activated. The more we tried to optimize existing approaches, the more it became clear that the mathematics itself needed to change.”
Rethinking how AI processes complex mathematics
The key concept behind these equations is differentiation, which measures how something changes. Simple derivatives tell you how quickly something increases or decreases, but higher-order derivatives capture more complex patterns.
Traditionally, AI systems calculate these derivatives using a process called recursive automatic differentiation. The derivative operation is applied as data moves through the neural networks that underpin modern AI, and then applied again to its own output for each higher-order derivative the equations demand.
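A minimal sketch of what that nesting looks like in practice (hypothetical code, not the team’s implementation): in an automatic-differentiation framework such as JAX, each additional derivative order wraps the differentiation operator around its own output, which is where the cost and instability creep in.

```python
# Minimal sketch (not the authors' code): higher-order derivatives of a small
# neural network via nested automatic differentiation in JAX.
import jax
import jax.numpy as jnp

def net(params, x):
    # A tiny fully connected network mapping a scalar x to a scalar output.
    w1, b1, w2, b2 = params
    h = jnp.tanh(w1 * x + b1)
    return w2 * h + b2

params = (1.5, 0.1, -0.7, 0.2)

# First derivative du/dx: one pass of automatic differentiation.
du_dx = jax.grad(net, argnums=1)

# Second derivative d2u/dx2: differentiate the derivative itself -- the
# "recursive" step that becomes costly and unstable for noisy, high-order PDE terms.
d2u_dx2 = jax.grad(du_dx, argnums=1)

print(du_dx(params, 0.3), d2u_dx2(params, 0.3))
```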
With complex systems and noisy data, however, this approach runs into trouble: it can become unstable and demand significant computing resources.
Researchers liken this to repeatedly zooming in on a rough, jagged line. Each step amplifies imperfections, making the final result less reliable. To overcome this, the team realized they needed a way to smooth the data before analyzing it.
Mollifier layers offer a smarter solution
The answer came from a concept introduced by mathematician Kurt Otto Friedrichs in the 1940s: the “mollifier,” a tool designed to smooth out irregular or noisy functions.
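In its standard textbook form (the paper’s exact kernel may differ), a mollifier is a smooth, bump-shaped function that vanishes outside a small window of width ε; convolving a rough function with it produces a smooth approximation. Here C is a constant chosen so the kernel integrates to one:

```latex
% A standard Friedrichs-style mollifier, supported on |x| < epsilon
\eta_{\varepsilon}(x) =
  \begin{cases}
    \dfrac{C}{\varepsilon}\, \exp\!\left( \dfrac{-1}{\,1 - (x/\varepsilon)^{2}\,} \right) & |x| < \varepsilon \\[1ex]
    0 & \text{otherwise}
  \end{cases}

% Smoothing ("mollifying") a rough function f by convolution
(f * \eta_{\varepsilon})(x) = \int f(y)\, \eta_{\varepsilon}(x - y)\, dy
```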
Applying this idea, the researchers built a “mollifier layer” into the AI model. This layer smooths the input data before derivatives are computed, sidestepping the instability of the traditional approach.
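To give a flavor of the idea (an illustrative sketch under simplified assumptions, not the published Mollifier Layers architecture), one can smooth a noisy one-dimensional signal by convolving it with such a kernel, and then estimate derivatives by convolving with the kernel’s own derivative rather than differentiating the rough data directly:

```python
# Illustrative sketch (hypothetical, not the published Mollifier Layers implementation):
# smooth a noisy 1-D signal with a compactly supported mollifier kernel, then
# estimate its derivative by convolving with the kernel's derivative.
import jax
import jax.numpy as jnp

def mollifier_kernel(eps: float, dx: float):
    # Friedrichs-style bump: exp(-1 / (1 - (x/eps)^2)) on |x| < eps, zero outside.
    x = jnp.arange(-eps, eps + dx, dx)
    bump = jnp.where(
        jnp.abs(x) < eps,
        jnp.exp(-1.0 / jnp.clip(1.0 - (x / eps) ** 2, 1e-6)),
        0.0,
    )
    kernel = bump / (jnp.sum(bump) * dx)   # normalize so the kernel integrates to ~1
    dkernel = jnp.gradient(kernel, dx)     # derivative of the smooth kernel
    return kernel, dkernel

dx = 0.01
xs = jnp.arange(0.0, 2.0 * jnp.pi, dx)
noisy = jnp.sin(xs) + 0.1 * jax.random.normal(jax.random.PRNGKey(0), xs.shape)

kernel, dkernel = mollifier_kernel(eps=0.2, dx=dx)
smooth = jnp.convolve(noisy, kernel, mode="same") * dx   # mollified signal
deriv = jnp.convolve(noisy, dkernel, mode="same") * dx   # derivative via the kernel
```

Because the kernel is smooth and compactly supported, its derivative is well behaved, so the differentiation burden shifts from the noisy data onto the kernel.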
“We initially thought the problem had to do with the architecture of neural networks,” said Ananyae Kumar Bartali, a graduate of Penn Engineering’s Master of Science in Scientific Computing program and another co-author on the paper. “But after careful tuning of the network, we ultimately discovered that the bottleneck was the recursive autodifferentiation itself.”
The results were striking. The new method suppresses noise and sharply cuts the computational cost of solving these equations.
Implementing the mollifier layer to smooth the signal before measuring its derivatives significantly reduced both the noise and the growth in computational cost. “This lets us solve these equations more reliably, without the same computational burden,” Bartali says.
Uncovering the secrets of DNA organization
One of the most promising applications of this approach is in understanding chromatin, the complex structure of DNA and proteins within cells.
Although these structures function on an incredibly small scale, they play an important role in determining how genes are turned on or off.
“Although these domains are only 100 nanometers in size, they play important roles in biology and health because their accessibility determines gene expression, and gene expression governs cell identity, function, aging, and disease,” Shenoy says.
By estimating the rate of epigenetic reactions that control gene activity, a new AI method could help scientists go beyond simply observing chromatin to predicting how it changes over time.
“Tracking how these reaction rates change during aging, cancer, and development opens up new therapeutic possibilities. If reaction rates control chromatin organization and cell fate, then altering these rates could guide cells toward desired states,” Vinayak added.
Beyond biology: far-reaching scientific implications
The potential applications of mollifier layers extend far beyond genetics. Many fields of science, such as materials research and fluid mechanics, involve complex equations and noisy data.
This new framework has the potential to provide a more stable and efficient way to uncover hidden parameters across a variety of systems.
The researchers see this as a step toward a larger goal: turning observations into deeper understanding.
“The ultimate goal is to move from observing complex patterns to quantifying the rules that generate them,” Shenoy says. “If you understand the rules that govern the system, you may be able to change it.”
This research was conducted at the University of Pennsylvania School of Engineering and Applied Science and was supported by National Cancer Institute (NCI) award U54CA261694 (VBS); National Science Foundation (NSF) Center for Engineering Mechanobiology (CEMB) grant CMMI-154857 (VBS); NSF grant DMS-2347834 (VBS); National Institute of Biomedical Imaging and Bioengineering (NIBIB) awards R01EB017753 (VBS) and R01EB030876 (VBS); and National Institute of General Medical Sciences (NIGMS) award R01GM155943 (VBS).

