    Mental Health

    Study finds that high trust in AI makes individuals more susceptible to ‘cognitive surrender’

    By healthadmin | April 30, 2026 | 7 Mins Read


    A recent study released as a Wharton School research paper provides evidence that people increasingly rely on artificial intelligence to make decisions, a phenomenon the researchers call “cognitive surrender.” The findings suggest that individuals tend to adopt computer-generated answers without thinking critically about them. This habit improves human accuracy when the software is correct, but sharply reduces performance when the system makes mistakes.

    Since the late 20th century, psychologists have generally divided human cognition into two distinct categories. System 1 represents immediate automatic reactions triggered by instincts and emotions. System 2 involves the deliberate, effortful reflection required to solve complex mathematical equations and consider difficult choices.

    However, the rapid rise of generative algorithms is introducing new dynamics that do not fit neatly into this traditional model. Nowadays, people frequently delegate their thinking to external software, outsourcing tasks ranging from composing emails to complex medical diagnoses.

    “If you look at how AI is used in society, it has become an ever-available cognitive partner,” said Steven Shaw, a postdoctoral fellow at the Wharton School. “While much of the public debate has focused on whether AI models are accurate, biased, or capable, we thought the human question was missing: what happens to our own reasoning if we can easily outsource our thinking?”

    Shaw said the project evolved from observing real-world patterns in everyday life. “People don’t just ask AI for information, they often let AI structure their thoughts, explanations, and decisions,” he explained.

    To address this, the researchers proposed a tri-system theory, which adds artificial cognition as a third thinking system. “From a theoretical perspective, we build on dual-process theory and introduce a tri-system theory of cognition, which adds System 3, artificial cognition, to the existing Systems 1 (intuitive) and 2 (deliberative),” Shaw said.

    “We define and characterize System 3 in our paper as external, automated, data-driven, and dynamic,” Shaw continued. “Establishing the existence of System 3 will embed AI into the human cognitive structure (what we call the ‘triadic cognitive ecology’).”

    To test this theory, the researchers distinguished strategic support from complete dependence. Cognitive offloading occurs when a person uses a tool, such as a calculator, to aid their reasoning. In contrast, cognitive surrender happens when a person relinquishes mental control entirely and adopts the algorithm’s judgment as their own.

    For the first study, the scientists recruited 359 participants in a laboratory setting, plus 81 online participants to ensure robust results. Volunteers completed seven logic puzzles designed so that intuition leads instantly to an incorrect answer; arriving at the right solution required effortful, analytical thinking to override the initial gut reaction.

    Participants were randomly divided into two groups, one working independently and the other given access to an AI chatbot. For those with chatbot access, the scientists secretly manipulated the software to suggest correct answers for some puzzles and confidently incorrect answers for others.

    “Since the use of AI was optional in our study, we did not know how often participants would actually utilize it,” Shaw said. “We were shocked by both the overall usage rate (over 50% of trials) and the high follow-up rate after participants opened the chat (over 90% followed correct AI advice and ~80% followed incorrect AI advice, conditional on chat use, Study 1 statistics).”

    When the software provided the correct answer, participants’ accuracy jumped to 71 percent, compared with about 46 percent for participants who worked without assistance. When the algorithm provided incorrect advice, human accuracy plummeted to about 31 percent. Access to the chatbot also increased participants’ confidence in their answers, even when the advice was completely wrong.
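    As a rough sanity check (a toy back-of-the-envelope model, not the authors’ statistical analysis), blending the unaided accuracy with the usage and advice-following rates quoted above approximately reproduces the aided-accuracy figures. The exact input values below are assumptions read loosely from the reported Study 1 statistics.

```python
# Toy blending model (illustrative only, not the study's actual analysis).
def expected_accuracy(base_acc, ai_correct_rate, use_rate,
                      follow_correct, follow_wrong):
    """Expected accuracy when an AI chatbot is optionally available.

    base_acc        -- accuracy when solving unaided
    ai_correct_rate -- fraction of trials where the AI's advice is right
    use_rate        -- fraction of trials where the chat is opened
    follow_correct  -- P(adopt advice | advice correct, chat used)
    follow_wrong    -- P(adopt advice | advice wrong, chat used)
    """
    # If the advice is followed, accuracy equals the advice's correctness;
    # otherwise the participant falls back on their unaided accuracy.
    acc_if_correct = follow_correct + (1 - follow_correct) * base_acc
    acc_if_wrong = (1 - follow_wrong) * base_acc
    aided = (ai_correct_rate * acc_if_correct
             + (1 - ai_correct_rate) * acc_if_wrong)
    return use_rate * aided + (1 - use_rate) * base_acc

# Assumed inputs: 46% unaided accuracy, ~50% chat usage,
# ~90% follow correct advice, ~80% follow incorrect advice.
print(round(expected_accuracy(0.46, 1.0, 0.5, 0.9, 0.8), 3))  # AI always right
print(round(expected_accuracy(0.46, 0.0, 0.5, 0.9, 0.8), 3))  # AI always wrong
```

    Under these assumptions the model predicts roughly 70 percent accuracy with a fully correct AI and roughly 28 percent with a fully incorrect one, in the same ballpark as the study’s reported 71 and 31 percent, suggesting the follow-rate arithmetic alone accounts for most of the effect.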

    The scientists found that participants who reported higher general trust in technology were more likely to succumb to false suggestions. People who naturally enjoy deep thinking, a trait called need for cognition, were more successful at recognizing and rejecting erroneous output. Participants with greater fluid intelligence, the capacity to solve unfamiliar problems, were also more resistant to cognitive surrender.

    To see how situational pressure changes these patterns, the researchers conducted a second experiment with 485 participants. Everyone had access to the assistant, but half of the participants faced a strict 30-second time limit on each puzzle. Overall accuracy fell under the time constraint, yet reliance on the algorithm remained strong.

    In a third experiment involving 450 participants, the scientists tested whether financial incentives and immediate performance feedback could reduce cognitive surrender. Half of the participants could earn a 20-cent cash bonus and received an instant notification indicating whether their submitted answer was correct or incorrect.

    The rewards and feedback encouraged participants to pay attention and double-check the software’s output. The rate at which participants rejected incorrect advice doubled, from 20 percent to 42 percent. Despite this improvement, cognitive surrender remained widespread, as many incentivized participants still accepted incorrect answers.

    The researchers combined data from all three experiments to estimate the overall strength of the effect. This final synthesis included 1,372 participants and 9,593 individual puzzle trials. The pooled dataset confirmed that human accuracy consistently tracked the quality of the algorithm’s output.

    Although the study provides detailed insight, the experiments relied on a specific type of logic puzzle in a highly controlled setting. “These are controlled experiments using structured inference tasks, so they are a clear demonstration of the phenomenon rather than a complete map of real-world AI use,” Shaw explained.

    He added that cognitive surrender is not inherently negative. “Cognitive surrender is not the same as saying AI is bad or that its use is irrational. In many situations, AI can improve judgment,” Shaw said. “The key issue is coordination: knowing when the AI is helping you think and when the AI is quietly doing the thinking for you.”

    “We believe that the allure and sycophantic nature of modern LLMs in particular can often lead users into cognitive surrender without realizing it,” he continued. (An LLM, or large language model, is the underlying system that powers modern chatbots.)

    Shaw also highlighted specific approaches for future research in this area. “The methodological point for researchers looking to study cognitive surrender is that showing people an ‘AI-generated answer’ (i.e., a hypothetical AI answer) in a vignette is not the same as letting people decide whether, when, and how to consult a live AI assistant,” he noted.

    “Effective research should pair tasks with real, optional LLM assistants, so researchers can observe whether people open the chat, what questions they ask, and whether they follow or override its answers,” Shaw added.

    “Studying cognitive surrender experimentally requires controlling or randomizing the accuracy of the AI output for only the specific items of interest, while leaving all other elements of the LLM unconstrained,” Shaw explained. This lets scientists measure real human behavior in a realistic digital environment.

    Looking ahead, the researchers plan to expand the investigation. “The next step is to use field research to study cognitive surrender in naturalistic, higher-stakes settings, such as medical, legal, and educational contexts,” Shaw said. “We also want to identify interventions that preserve the benefits of AI while reducing uncritical reliance on it, both on the user side and on the interface-design side.”

    This study provides practical lessons for everyday users. “Although AI is extremely useful, our findings suggest that people can enter what is called ‘cognitive surrender’, a state in which they adopt AI outputs with minimal scrutiny even when they are wrong,” Shaw explained.

    “Cognitive surrender is adaptive and improves the accuracy and speed of reasoning, but it also couples human decision-making to System 3 and shifts agency to AI. Practically speaking, we need to consider carefully in which situations and domains we accept a reduction or loss of agency,” he said. “If we want to protect skills and critical thinking, users should first form their own answers through intuition and deliberation, and then use AI models to challenge, refine, and extend their thinking, rather than replace it.”

    The study, “Thinking – Fast, Slow, Artificial: How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender,” was authored by Steven D. Shaw and Gideon Nave.


