A recent study published as a Wharton School research paper provides evidence that people are increasingly handing decisions over to artificial intelligence, a phenomenon the researchers call “cognitive surrender.” The findings suggest that individuals tend to adopt computer-generated answers without thinking critically about them, a habit that improves human accuracy when the software is correct but significantly reduces performance when the system makes mistakes.
Since the late 20th century, psychologists have generally divided human cognition into two distinct categories. System 1 represents immediate automatic reactions triggered by instincts and emotions. System 2 involves the deliberate, effortful reflection required to solve complex mathematical equations and consider difficult choices.
However, the rapid rise of generative AI is introducing new dynamics that do not fit neatly into this traditional model. People now routinely delegate their thinking to external software, outsourcing tasks ranging from composing emails to making complex medical diagnoses.
“If you look at how AI is used in society, it has become an ever-available cognitive partner,” said Steven Shaw, a postdoctoral fellow at the Wharton School. “While much of the public debate has focused on whether AI models are accurate, biased, or capable, we thought the human question was missing: What happens to our own reasoning if we can easily outsource our thinking?”
Shaw said the project evolved from observing real-world patterns in everyday life. “People don’t just ask AI for information, they often let AI structure their thoughts, explanations, and decisions,” he explained.
To address this, the researchers proposed a tri-system theory, which adds artificial cognition as a third thinking system. “From a theoretical perspective, we build on dual-process theory and introduce a tri-system theory of cognition, which adds System 3, artificial cognition, to the existing Systems 1 (intuitive) and 2 (deliberative),” Shaw said.
“We define and characterize System 3 in our paper as external, automated, data-driven, and dynamic,” Shaw continued. “Establishing the existence of System 3 will embed AI into the human cognitive structure (what we call the ‘triadic cognitive ecology’).”
To test this theory, the researchers separated the concept of strategic support from complete dependence. Cognitive offloading occurs when a person uses a tool, such as a calculator, to aid their reasoning. Cognitive surrender, in contrast, happens when a person relinquishes mental control entirely and adopts the algorithm’s judgment as their own.
For the first study, the scientists recruited 359 participants in a laboratory setting, plus 81 online participants to ensure robust results. Volunteers completed seven logic puzzles designed to evoke an immediate, intuitively appealing but incorrect answer. Arriving at the right solution required effortful, analytical thinking to override the initial gut reaction.
Participants were randomly divided into two groups, one working independently and the other given access to an AI chatbot. For those with access, the researchers covertly manipulated the assistant so that it suggested the correct answer on some puzzles and a confidently stated incorrect answer on others.
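For readers curious how this kind of design might be scripted, here is a minimal, hypothetical sketch in Python. The puzzle items, helper names, and 50/50 split are illustrative assumptions, not details taken from the actual study.

```python
import random

# Hypothetical illustration only: scripting per-puzzle chatbot advice so that
# some puzzles receive the correct answer and others a confident wrong one.
# The puzzles below are classic trick questions of the general kind described,
# not the items actually used in the study.
PUZZLES = {
    "bat_and_ball": {"correct": "5 cents", "confident_wrong": "10 cents"},
    "widgets": {"correct": "5 minutes", "confident_wrong": "100 minutes"},
    "lily_pads": {"correct": "47 days", "confident_wrong": "24 days"},
}


def assign_advice_conditions(puzzle_ids, share_incorrect=0.5, seed=None):
    """Randomly assign each puzzle to receive correct or incorrect AI advice."""
    rng = random.Random(seed)
    ids = list(puzzle_ids)
    rng.shuffle(ids)
    cutoff = int(len(ids) * share_incorrect)
    return {pid: ("incorrect" if i < cutoff else "correct")
            for i, pid in enumerate(ids)}


def scripted_advice(puzzle_id, condition):
    """Return the advice the chatbot is scripted to give for a puzzle."""
    answers = PUZZLES[puzzle_id]
    return answers["confident_wrong"] if condition == "incorrect" else answers["correct"]


if __name__ == "__main__":
    conditions = assign_advice_conditions(PUZZLES, seed=1)
    for pid, cond in conditions.items():
        print(f"{pid}: {cond} advice -> {scripted_advice(pid, cond)}")
```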
“Since the use of AI was optional in our study, we did not know how often participants would actually utilize it,” Shaw said. “We were shocked by both the overall usage rate (over 50% of trials) and the high follow-up rate after participants opened the chat (over 90% followed correct AI advice and ~80% followed incorrect AI advice, conditional on chat use, Study 1 statistics).”
When the software provided the correct answer, participants’ accuracy jumped to 71 percent, compared with about 46 percent for participants who worked without assistance. When the algorithm provided incorrect advice, human accuracy plummeted to about 31 percent. Access to the chatbot also increased participants’ confidence in their answers, even when the advice was completely wrong.
The scientists found that participants who reported higher levels of general trust in technology were more likely to succumb to false suggestions. People who naturally enjoy deep thinking, a trait called need for cognition, were more successful at recognizing and rejecting erroneous output. Participants with greater fluid intelligence, the ability to solve unfamiliar problems, were also more resistant to cognitive surrender.
To see how situational pressures change these patterns, the researchers conducted a second experiment with 485 participants. Everyone had access to the assistant, but half of the participants were given a strict time limit of 30 seconds per puzzle. Although overall accuracy declined under time pressure, reliance on the algorithm remained strong.
In a third experiment involving 450 participants, the scientists tested whether financial incentives and immediate performance feedback could reduce cognitive surrender. Half of the participants could earn a 20-cent cash bonus and received an instant notification indicating whether each submitted answer was correct or incorrect.
These rewards and feedback loops encouraged participants to pay attention and double-check the software’s output. The rate at which participants rejected incorrect advice roughly doubled, from 20 percent to 42 percent. Despite this improvement, cognitive surrender remained widespread, as many incentivized participants still accepted incorrect answers.
The researchers combined data from all three experiments to estimate the overall strength of the effect. This final synthesis included 1,372 participants and 9,593 individual puzzle trials. The large dataset confirmed that human accuracy consistently tracked the quality of the algorithm’s output.
Although the study provides detailed insight, the experiments relied on specific logic puzzles in a highly controlled setting. “These are controlled experiments using structured inference tasks, so they are a clear demonstration of the phenomenon rather than a complete map of real-world AI use,” Shaw explained.
He added that cognitive surrender is not inherently negative. “Cognitive surrender is not the same as saying AI is bad or that its use is irrational. In many situations, AI can improve judgment,” Shaw said. “The key issue is coordination: knowing when the AI is helping you think and when the AI is quietly doing the thinking for you.”
“We believe that the allure and sycophantic nature of modern LLMs in particular can often lead users into cognitive surrender without realizing it,” he continued. To be clear, an LLM, or large language model, is the type of AI system that powers modern chatbots.
Shaw also highlighted specific approaches for future research in this area. “The methodological point for researchers looking to study cognitive surrender is that showing people an ‘AI-generated answer’ (i.e., a hypothetical AI answer) in a vignette is not the same as letting people decide whether, when, and how to consult a live AI assistant,” he noted.
“Effective research should pair tasks with real, optional LLM assistance, so researchers can observe whether people open the chat, what questions they ask, and whether they follow or override its answers,” Shaw added.
“Experimentally isolating cognitive surrender requires controlling or randomizing the accuracy of the AI’s output for only the specific items or configurations of interest, while leaving all other elements of the LLM unconstrained,” Shaw explained. This approach allows scientists to measure genuine human behavior in a realistic digital environment.
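One way to picture this, as a rough sketch rather than the authors’ actual setup, is a thin wrapper around a live chat model that overrides its replies only for the items under experimental control. Here, call_llm, the puzzle identifiers, and the scripted replies are all hypothetical placeholders.

```python
# Hypothetical sketch, not the study's code: intercept only the target items,
# replacing the model's reply with a scripted correct or incorrect answer,
# and pass every other message through to the live model untouched.

SCRIPTED_ANSWERS = {
    # target puzzle id -> (condition, scripted reply); both are made-up examples
    "puzzle_03": ("incorrect", "The answer is 24 days."),
    "puzzle_05": ("correct", "The answer is 5 cents."),
}


def call_llm(message):
    """Placeholder for a real chat-model API call (an assumption, not a real API)."""
    return "unconstrained model reply to: " + message


def controlled_assistant(message, puzzle_id=None):
    """Override replies only for items under experimental control; pass the rest through."""
    if puzzle_id in SCRIPTED_ANSWERS:
        _condition, reply = SCRIPTED_ANSWERS[puzzle_id]
        return reply
    return call_llm(message)


if __name__ == "__main__":
    print(controlled_assistant("Can you help with this lily pad puzzle?", "puzzle_03"))
    print(controlled_assistant("What's the capital of France?"))
```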
Looking ahead, the researchers plan to expand their investigation. “The next step is to use field research to study cognitive surrender in naturalistic, higher-stakes contexts, including medical, legal, and educational settings,” Shaw said. “We also want to identify interventions that preserve the benefits of AI while reducing uncritical reliance on it, both on the user side and on the interface-design side.”
This study provides practical lessons for everyday users. “Although AI is extremely useful, our findings suggest that people can enter what is called ‘cognitive surrender’, a state in which they adopt AI outputs with minimal scrutiny even when they are wrong,” Shaw explained.
“Cognitive surrender is adaptive and improves the accuracy and speed of reasoning, but it also connects human decision-making to System 3 and shifts agency to AI. Practically speaking, we need to carefully consider in what situations and domains we accept a reduction or loss of agency,” he said. “If we want to protect skills and critical thinking, users must first form their own answers based on intuition and deliberation, and then use AI models to challenge, refine, and extend their thinking, rather than replace it.”
The study, “Thinking – Fast, Slow, Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender,” was authored by Steven D. Shaw and Gideon Nave.

