
    Scientists seek AI that can be explained with protein language models

    By healthadmin | May 12, 2026 | 6 Mins Read



    Protein language models are artificial intelligence tools that help us design proteins with useful properties, including entirely new structures never before seen in nature.

    This technology has great potential to address global challenges, such as synthesizing enzymes that can absorb carbon dioxide from the atmosphere and building catalysts that can significantly reduce energy use and harmful waste byproducts in industrial processes.

    Even though many of these models are beginning to shape real-world decision-making in biotechnology, major challenges remain. Protein language models (pLMs) operate primarily as black boxes, making it difficult to understand their decision-making processes and determine whether their predictions are reliable, biased, or safe to apply in the real world.

    In a new perspective paper published today in Nature Machine Intelligence, researchers at the Centre for Genomic Regulation (CRG) analyze how “explainable AI,” the set of techniques and methods that enable humans to understand, interpret, and trust a model’s decisions, is currently being applied to protein language models.

    “Protein language models are advancing rapidly, but our understanding of fundamental biological processes such as folding and catalysis has not progressed in parallel with these breakthroughs,” says Dr. Noelia Ferruz, group leader at the CRG and corresponding author of the paper.

    “In some ways, we’ve even lost some of the transparency that characterized physics-based models. Without better ways to explain what these models learn and how they make decisions, we risk building powerful tools that we can’t fully trust,” added Dr. Ferruz.

    The authors also call for action from the research community to make protein design systems more transparent, reliable, and safe. “If we want protein language models to become reliable partners in discovery and design, explainability must not be an afterthought,” says Andrea Hanklinger, lead author of the paper.

    Four places to look when explaining pLM decisions

    The authors write that if we want to understand why an AI model made a particular prediction about a protein’s structure or properties, we first need to ask where that explanation can come from.

    They identify four key locations along the model’s journey that are critical to being able to explain decision-making. The first is the training data the model learned from. This could reveal, for example, whether the model is biased because human genetic diversity is underrepresented in its training data, or whether there is enough data on human proteins to begin with.
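
    A check of training-data composition can be sketched in a few lines. The dataset, organism names, and sequences below are hypothetical toy values, not from the paper; a real audit would stream a full FASTA file with taxonomy metadata.

```python
from collections import Counter

# Hypothetical toy training set of (organism, sequence) pairs.
training_set = [
    ("E. coli", "MKTAYIAKQR"),
    ("E. coli", "MSLLTEVETY"),
    ("E. coli", "MKVLAAGIVQ"),
    ("H. sapiens", "MEEPQSDPSV"),
]

def composition(dataset):
    """Fraction of training sequences contributed by each organism."""
    counts = Counter(org for org, _ in dataset)
    total = sum(counts.values())
    return {org: n / total for org, n in counts.items()}

skew = composition(training_set)
print(skew)  # E. coli contributes 0.75 of sequences: a skew the model may inherit
```

    Even a simple tally like this can flag whether one organism or protein family dominates the data the model will generalize from.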

    The second is the specific protein sequence given to the model, and which of its features drive the prediction. In a house price prediction model, such features might include square meters, number of bedrooms, or location; for a protein language model, the question is which amino acids or regions of the protein have the most influence on the prediction.
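
    One common way to measure per-residue influence is occlusion: mask each position in turn and record how much the prediction drops. The `predict` function below is a hypothetical toy stand-in (hydrophobic fraction), not a real pLM; in practice the same loop would call the actual model.

```python
# Toy stand-in for a pLM's scalar prediction head (hypothetical).
HYDROPHOBIC = set("AVILMFWY")

def predict(seq):
    """Toy score: fraction of hydrophobic residues in the sequence."""
    return sum(aa in HYDROPHOBIC for aa in seq) / len(seq)

def occlusion_attribution(seq, mask="X"):
    """Score drop when each residue is masked; larger drop = more influential."""
    base = predict(seq)
    return [base - predict(seq[:i] + mask + seq[i + 1:]) for i in range(len(seq))]

scores = occlusion_attribution("MKVAL")
# Masking K (a non-hydrophobic residue) leaves the toy score unchanged,
# so its attribution is zero; hydrophobic positions get positive scores.
```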

    The third is the architecture and internal components of the protein language model itself, which is similar to opening the hood of a car to check the engine. For protein language models, we need to check whether the artificial neurons used by the AI are processing the information correctly.
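
    Looking under the hood often means inspecting attention weights: which residues a given head focuses on for a given query position. The logits below are made-up illustrative numbers; in a real model they would be read out of the transformer’s layers.

```python
import math

def softmax(xs):
    """Normalize raw attention logits into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical attention logits from one head, for a single query residue,
# over a 4-residue sequence.
logits = [2.0, 0.1, 0.1, 1.5]
weights = softmax(logits)
focus = max(range(len(weights)), key=lambda i: weights[i])
print(focus)  # this head attends most strongly to residue 0
```

    Checking whether such focus patterns line up with known contacts or functional sites is one way to ask if the model’s internals are doing something biologically sensible.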

    Finally, researchers can explore protein language models by tweaking them and seeing what happens. This is called input-output behavior and involves studying how the model’s answer changes if you slightly change the sequence of the protein or the question.
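than
    This input-output probing is often done as an in-silico mutational scan: substitute every residue, one position at a time, and record how the prediction shifts. The charge-based `predict` below is a hypothetical toy surrogate; a real scan would query the actual model for each mutant.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Toy stand-in for a pLM property head: net charge near neutral pH (hypothetical).
CHARGE = {"D": -1.0, "E": -1.0, "K": 1.0, "R": 1.0}

def predict(seq):
    return sum(CHARGE.get(aa, 0.0) for aa in seq)

def mutational_scan(seq):
    """Change in prediction for every single-point substitution."""
    base = predict(seq)
    effects = {}
    for i, wt in enumerate(seq):
        for mut in AMINO_ACIDS:
            if mut != wt:
                mutant = seq[:i] + mut + seq[i + 1:]
                effects[(i, wt, mut)] = predict(mutant) - base
    return effects

effects = mutational_scan("MKD")
print(effects[(2, "D", "K")])  # flipping D -> K raises the toy net charge by 2.0
```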

    What are scientists trying to accomplish when they open a “black box”?

    To understand how explainable artificial intelligence is being used in protein research today, the researchers reviewed the existing scientific literature and examined dozens of studies where explainable tools have already been applied to protein language models. This is the most comprehensive study of its kind to date.

    The authors organize a scattered body of work into a set of distinct roles that explainability can play in protein research, helping to make a technically dense field much more approachable.

    Most often, explainability is used as an “evaluator”: a way to check whether a model has learned patterns that biologists already know, such as recognizing binding sites or structural motifs.
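
    An evaluator check of this kind can be as simple as asking whether the highest-attribution residues fall inside annotated functional sites. The scores and site positions below are hypothetical toy values for illustration.

```python
def precision_at_k(scores, annotated_sites, k=3):
    """Fraction of the k highest-scoring positions that fall in known sites."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return sum(i in annotated_sites for i in top) / k

# Hypothetical per-residue importance scores and an annotated binding site.
scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7]
annotated_sites = {1, 3, 5}
print(precision_at_k(scores, annotated_sites))  # 1.0: top-3 all inside the site
```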

    “Evaluators can help benchmark the quality of a model, but they cannot extrapolate to unknown examples, improve the model architecture, or, more importantly, reveal biological insights gained from the training data,” says Hanklinger.

    A few studies have gone a step further and used these insights to “multitask” and reapply the learned signals to annotate new proteins or predict additional properties. The authors point out that these two roles dominate the field today, indicating that explainability is primarily used as a validation and support tool rather than a driver of discovery.

    The researchers found that only a limited number of studies use explainable AI insights as “engineers” or “coaches,” applying them to trim unnecessary model components, redesign the architecture, or steer generated protein sequences toward desired traits.

    Toward a “teacher” protein language model

    The fifth role for explainable AI in protein language models, the “teacher,” stands out as the most ambitious and least realized. This type of explainable AI can help uncover entirely new biological principles that humans were previously unaware of.

    The authors compare this milestone to similar moments in other areas of artificial intelligence, such as when AlphaZero began discovering novel chess strategies that amazed grandmasters, or when AI systems helped decipher corrupted ancient texts by recognizing linguistic patterns invisible to the human eye. At that point, the technology moved from being an efficiency tool to a source of new insight.

    In protein science, reaching the teacher stage means AI systems that help researchers discover new rules for protein folding, catalysis, or molecular interactions that could change the way we design medicines, materials, and sustainable technologies.

    “For us, the real holy grail is controllable protein design. Imagine being able to tell a model, ‘Design a protein that has this shape and is active at this pH.’ Not only do you receive a candidate sequence, but you also receive a clear explanation of why that design works and, importantly, why the alternatives fail,” explains Dr. Ferruz.

    “For example, a model can explain that a particular mutation disrupts a hydrogen bond network essential for stability. Once we reach a level of control and mechanistic transparency, protein language models will move from being good generators to truly reliable design partners,” she added.

    The authors emphasize that reaching teacher status for protein language models does not happen automatically. Today’s models have powerful pattern recognition capabilities, but often rely on statistical correlations rather than true understanding. The authors claim that reliability and validation are the main concerns and that several conditions must be met.

    The paper calls on the community to create robust benchmarks and evaluation frameworks to test whether explanations truly reflect the model’s inferences. The authors also call for open-source tools that make explainability accessible and comparable across labs. Most importantly, insights gained from AI must ultimately be verified in the laboratory, turning mathematical patterns into experimentally confirmed biological knowledge.

    Source:

    Centre for Genomic Regulation

    Journal reference:

    Nature Machine Intelligence, DOI: 10.1038/s42256-026-01232-w


