AI is becoming a significant part of our daily lives, shaping how we work, think, and make decisions. But as we increasingly rely on AI tools, we must ask: How does this impact our decision-making processes? Wharton professor Gideon Nave and postdoctoral researcher Steven D. Shaw discuss the concept of cognitive surrender and its implications for the future.

Transcript

Is AI Changing How We Make Decisions?

Dan Loney: We are seeing how much artificial intelligence is impacting our lives and how it is changing things like our work. But how does it change our decision process? That's a question that is addressed in recent research, and it’s a pleasure to have the authors of that research joining us here today, Gideon Nave, who's an associate professor of marketing here at the Wharton School, and Steven Shaw, who is a postdoctoral researcher here at Wharton. Gentlemen, great to have you both with us. What was the genesis of wanting to do this specific type of research?

Steven Shaw: I think we can observe things in nature. And just observing how integrated AI has become in our daily lives, we felt that the ability to actually outsource thinking hadn't really been studied in its own right. It's sort of a profound idea, and a bit provocative, I would say, in the paper: these AI tools are so ingrained in our daily lives and decision processes that we now have the option, the ability, to outsource thinking itself.

Gideon Nave: I've been studying decision processes for a while, and we know that there are certain theories that account for how people make decisions and how they use all the types of decision processes, from intuition to more deliberate, analytical thinking. And I think that technology in the past was integrated as a kind of block to which you could offload some of your cognition to perform certain tasks, like using a calculator or using GPS.

But to me, it sounds like the current theories of how humans make judgments and decisions must be going through some update once we have these devices that really, as Steve said, can replace thinking itself. It's very important to start trying to account for this theoretically and define all sorts of tendencies and constructs, so we can really study it in the future.

Loney: When you think about how artificial intelligence is truly impacting our lives, we probably think about people's reasoning, but not the perspective of how artificial intelligence is going to impact that process.

Shaw: Yeah. And that's, again, one of the arguments that we make in the paper. We have these classic models. For example, the dual process model that Gideon has talked about, intuitions and deliberations. But the reliance on AI that we see in society now, and its integration into our lives, really changes the options that we have for making decisions. And it changes them so dramatically that we need to update many of our old models in cognitive science and psychology and marketing.

Loney: Gideon, it's no longer a dual process. It's a three-tiered process that we probably have to bring into play here, correct?

Nave: Yeah. We added artificial cognition as a third module. I think what's interesting is not only that we have this system and it adds to the others, but that its mere existence changes how we engage our other thinking modes. It changes how we use intuition, it changes how we use deliberation. And it also, surprisingly, changes how confident we are in our responses, even ones that we haven't really critically examined ourselves.

There are a lot of interesting effects going on here. And some of them are maybe a bit dark, looking to the future, if we think that these really are going to affect the tendencies and behavior of humans.

Loney: Adding this third component, does the dual process run into any roadblocks along the way as AI is becoming a greater component of our lives?

Shaw: Sure, and that's one of the big things we show experimentally in the paper. We call it “cognitive surrender.” Basically, simply having AI available as an option for decision-making, people can surrender their thoughts to it and let it think for them. They're basically bypassing the whole internal set of brain processes, subverting and substituting system one and system two, and instead adopting answers from what we call system three. And as Giddy said, we saw that when cognitive surrender is engaged, people adopt those answers and are more confident in those answers.

Loney: Do they rely on them more often, then?

Shaw: In our experiments, it was optional. We just put ChatGPT in a window, and they were doing some logic and reasoning questions. We said, “You can use AI if you want to, but you don't have to.” We saw over 50% of the time they consulted ChatGPT. And once they consulted ChatGPT, adoption rates were very high. Even when AI was incorrect — gave them the incorrect answer — which we experimentally manipulated, people adopted the answer over 80% of the time.

Critical Thinking Skills in the Age of AI

Loney: One of the things that I have talked with many people about is how bringing AI into the business world is going to impact so many different processes on a day-to-day basis in companies. One of the things that is brought up is how it impacts labor, and how people interact with artificial intelligence. It seems like part of this research also can lead us down the path to better understand what that relationship is going to be like, with companies and their employees, and what's going to be expected from them as we move forward.

Nave: Let me put it this way. If we are completely surrendering our thinking to AI, what value do we bring to a company? It's not clear. So, even this tendency to surrender may be something that, in the future, companies would want to consider before they hire somebody. I don't want some person who is basically just giving me what the AI already gives me. I can get that by myself. That also tells us, if we think of the education of the next business leaders, that this is obviously a skill we want to make sure people have and don't lose. The capacity to think critically, the capacity to check what the AI is giving you, has become more and more important over time. This is a kind of muscle that we have, and that hopefully we are not going to lose.

Loney: How much of a challenge is that? Because I think at times we do feel like AI is this beautiful benefit that we now have in our lives. But we still need to have that critical thinking as we move forward on so many things.

Shaw: That's the key question, right? How do we maintain critical thinking skills in the age of AI? And how fundamental is that? Well, I think we're only at the beginning of the age of AI. This technological integration is just getting started. Right now, we are constrained by communicating with LLMs through our phones or our computers. As those barriers fall away, that integration is only going to become stronger.

Loney: What did you take away from doing this research, Steven, that resonated with you? Maybe even something that you didn't expect you were going to see play out?

Shaw: How readily people were willing to cognitively surrender. That was pretty shocking. And how well the experiments worked, based on our theoretical contribution of this tri-system theory of cognition.

Loney: Gideon, what about you?

Nave: You know, I'm just worried about the future. I think we all focus on this point where the singularity is going to happen, when AI will really outsmart us. And everybody thinks that this point will come from AI getting better and better. But there is an alternative story here, of humans becoming more and more reliant on AI. Just as we now have air conditioners that set our temperature easily, and we can move from one place to another without any physical effort. Just as many of us have lost something because of this cultural or technological evolution, we may lose as a species something very critical to our existence, which is our capacity to think.

Loney: But going back to something you said a little while ago, the learning process for people in general is going to change because we have this component, and it is going to be a part of almost everything we do moving forward.

Nave: We may or we may not. Typically, we know that technological development moves much faster than any policy change. It's difficult for policy and educational systems to respond very quickly to what is happening here. I don't know what the answer is going to be. Of course, there are a lot of different perspectives and disagreements. What we suggested is provocative to some degree. It's not obvious that everybody would favor it. You know, you can say, “I want freedom. Let the market solve itself.” And with freedom, sometimes you get outcomes that are, at the end of the day, determined by power imbalances. I don't know where we are headed. We are here, as academics, to maybe try to blow the whistle on it. We can't do much more than that.

Loney: Steve, having done this research now, is there a next logical step that maybe you would like to take as you delve deeper?

Shaw: Part of the title of the paper is “The Rise of Cognitive Surrender,” so we now know that this phenomenon exists. And the question is, when is it adaptive? When is it good to outsource your thought to AI? AI gives us access to superintelligence. There are many instances where, you know, turning off thought can be a good thing.

But in a lot of high stakes contexts — education, health care — we don't want that to be happening. How do we fight the rise of cognitive surrender in those contexts? Is it on the user side, on the human side, through AI literacy and training? Or is it on the UX design side, by putting in different types of prompting or roadblocks that make sure or try to induce critical thinking as those decisions are being made?

Loney: It brings up the regulatory side of this, of just how much of a role regulation may have in a lot of these developments.

Nave: We're going to see how the software companies, the AI companies, will respond to this, how policymakers will respond to this, how educational institutions will respond to this. I think what's quite nice in this paper is that we have a very clear method of measuring cognitive surrender. Now, we can bring all sorts of interventions into the lab and see how we can move it around, and that's going to be quite a useful research tool.