Wharton’s Stefano Puntoni believes new technologies like generative AI can play a complementary role in human lives, once we overcome our fear of them. His research explores the psychology behind that fear and its roots in human identity.
Transcript
Exploring Consumer Behavior and the Rise of AI
Dan Loney: What drew your attention to the area of consumer behavior?
Stefano Puntoni: I’ve been doing work on consumer decision-making for over two decades now, and I’ve been fascinated by the topic of consumer behavior because it’s such a big part of what we do every day. Small decisions like buying a coffee or toothpaste, but also big decisions like buying a house or a car. I think it has a big role to play in our well-being, in the way we live our lives. It reflects a lot of things about who we are and who we want to be. So, I think understanding consumer behavior is a very interesting, fascinating lens into human psychology. Of course, as a business school professor, there are a lot of interesting questions to ask that can help businesses and companies do better.
Loney: We’re getting more into this intersection of consumer behavior and how AI is playing a role. How has that framed your look at where we stand right now?
Puntoni: I didn’t do any work related to AI or technology until about 10 years ago. The first half of my career was on different topics. I got interested in this because I started reading about what AI was becoming able to do, going back to about 2014, when the first impressive applications started appearing, like an agent that can understand your speech or a self-driving car. I have a background in statistics, and I just got curious to know how these neural networks were suddenly able to do these things.
It became an interest, and then it became a worry, because I realized just how powerful this technology was going to be. And we’re still in the early days. I was thinking about the implications this would have for my children and what kind of education they would need. At that time, I was also the academic director of a big program at the university where I used to work. I was thinking, “What do we need to teach these kids?” Not just so they would have a job next year, but so they would still have a job 10 years from now.
That became an area of concern, and it moved closer to my professional interests. At that point, I thought nobody was studying this in research. We didn’t know much about how people perceive AI, how AI makes them feel, what the barriers to adoption are, and what kinds of concerns people have with it. I started developing this new line of research, and it’s been keeping me busy.
How Will the Rise of AI Affect Everyday Life?
Loney: You mentioned that AI is somewhat of a polarizing topic. But the hope is that this is going to become a normal part of life as we move forward, correct?
Puntoni: This is the kind of topic that leaves no one indifferent. I never speak to anyone who says, “No, it’s not interesting, it’s not relevant, it’s not important.” Everybody recognizes it’s a momentous change — I would argue, a historical change. It will have implications for all kinds of things in life, from our ability to have an inclusive society to our ability to sustain democratic processes. On the positive side, there is incredible potential to improve welfare, well-being, and the economy.
I think there is another element informed by popular culture. We grew up with movies and books that highlight the dangers of technology, from The Matrix and Terminator to 2001: A Space Odyssey and Blade Runner. Those kinds of stories do inform how we think about technology, especially today, when we almost seem to be living in a sci-fi movie. We’re just thinking, “Wow. We could never have thought this would happen.” We’re at that moment of upheaval and change, and I think some of the fears are bubbling up because of that.
Loney: When you think about how our identities are potentially impacted by AI, I would imagine it’s a broad scope of study.
Puntoni: I think of the general question, which is not just to ask, “What do people think of AI?” and “How do we improve consumer beliefs about, or acceptance of, the technology?” For a tech company, those are obviously important questions.
Maybe more important, or more interesting, is to think about how AI changes the way we think about ourselves. It’s a link to identity — our human identity and our identity in specific domains. For example, in consumption, or at work. There are a lot of things that we do in life. We don’t do them only for instrumental reasons, to get a job done. We do them partly because that’s who we are. We have hobbies, we have passions, we have ways in which we construe our personas to ourselves and to other people. And those personas are important. Technology and automation can be a threat to those personas as more and more activities can be done by machines.
A potential stumbling block for a lot of tech deployments in organizations is that people feel threatened by the technology. They may not want to adopt it because they feel they can do the job better, or because they’re afraid that they’re now irrelevant, or because they are worried about what’s coming next. Some of the resistance is partly due to this perceived threat, so communicating properly about technology is important, and so is understanding how this technology might affect people’s sense of identity. Imagine that I am passionately into cooking. There are certain things that I do in cooking that I don’t want to be replaced by a machine. Maybe I’m baking bread, and I’m OK with using a machine that automates the physical labor of kneading the dough, which is hard work and boring. But what I don’t want is a machine that automates my cognitive skills, my unique ability to understand which ingredients are needed and how to actually bake the bread. So, something like a bread-baking machine might feel highly threatening to me.
You can think of that in a lot of different activities. Maybe you can think about it in your own life. What activities do you like? In which activities do you feel technology helps you, and in which can it hinder you? If you are an angler. If you are a cyclist. If you are a musician. It could be a lot of different things.
How Will the Rise of AI Affect the Workplace?
Loney: Is there a concern about a loss of skills with having that level of automation?
Puntoni: You develop skills by using them. The typical saying is “use it or lose it,” and that also applies with automation. There have been recent discussions in professional contexts — for example, airline pilots or doctors — where if you don’t practice certain skills, you may not be able to perform a task well. That is especially true in situations where the standard mode of operation is highly automated, and the automation switches off when there is a crisis. That is the moment when you need a human agent to be on top of things and highly skilled. But if we never get to practice, we might not be ready when the moment comes that our help, our input, is needed.
That also goes for driving a car. Imagine an autopilot system that drives you everywhere and then switches off when there is something really difficult on the road. Now, you haven’t been driving for weeks, and you don’t know what to do. That’s an issue.
Loney: How does it potentially impact the labor force and the workplace?
Puntoni: There is a lot of discussion now in labor economics and related fields on the impact of automation on the demand for labor, and on what kind of labor. This is not my field of expertise. But it is a really interesting topic, both from a policy-making point of view and from a regular person’s point of view, where you might want to think about what it is you’re going to be doing five or 10 years from now. The consensus seems to be that the current wave of automation stands out in its potential to automate a lot of different tasks within organizations. We moved away from physical labor to cognitive labor, which is the big story of the Industrial Revolution. We ended up moving from the factory floor into the office, and now we are being kicked out of the office. It’s not clear what other place we can go.
But there’s excitement, too. Take, for example, generative AI. There’s new literature springing up now, trying to understand the impact of this technology on productivity. Some of the early data are just stunning. Enormous increases in productivity.
My own research is not so much on the demand for labor, but more about workers’ perceptions of what it means to deploy more technology in the workplace. We have a paper that came out three years ago where we looked at the psychological correlates of technological unemployment. Meaning, does it feel different when you are replaced by a machine versus when you’re replaced by another person? What we find is that it does feel different. In fact, it feels better. You perceive replacement by a machine as being less threatening, generally, than replacement by another human worker, because you don’t tend to compare yourself to an algorithm or a robot. To be replaced by another person is just more threatening because it’s not a nice comparison to make. The job went to someone else.
However, in the same paper we find that these kinds of feelings are highly contingent on your temporal focus. People do find robotic replacement to be less threatening to their sense of self. But if I asked you a different question, “What do you think about your economic future?” it switches around, and now people find robotic replacement more threatening. Robotic replacement is a cue for skill obsolescence. The moment I get fired and replaced by another marketing professor, I can go out there and try to look for another job like the one I lost. But if I get replaced by an algorithm that becomes another professor, well, there’s never going to be another job for me. The logic of these temporal effects is very clear.
We’re also looking at people’s stereotypes or beliefs about AI, and how these can spill over to the people who are somehow connected to the AI. Imagine a company relying on AI for employee selection. What we’re finding is that people hold stereotypes about the people who get recruited through AI systems: if someone learns that you were hired through an AI-driven selection process, they tend to believe that you have inferior interpersonal skills, and also that you have superior analytical skills. They project the stereotypes we have about technology onto the people who were selected by technology. AI can be very good at selecting based on interpersonal skills, but people don’t hold that belief.
ChatGPT and the Rise of AI
Loney: Does ChatGPT play a role in that thought process of how AI as a replacement can sometimes be less threatening?
Puntoni: I think of ChatGPT and its launch in November 2022 as a very important moment in the diffusion of AI technology in society. The technology of generative AI had already been around for a little while, but it was confined to relatively narrow corners of society. The deployment at scale of a technology like that, where you only need an email address and a three-minute registration process, makes adoption possible for everybody.
People have been so intrigued by this. I’ve heard it’s the product with the fastest adoption ever recorded. It took only five days to reach a million users. It took less than eight weeks to reach 100 million users. Everybody’s trying it and playing with it. The first experience is just amazement that an algorithm can do this. I probably speak for many of us when I say we didn’t think we’d see something like that in our lifetimes. And it’s happening so fast.
Of course, most people are making an account for ChatGPT, and what do they do with it? Silly things. You ask it to write your own bio, just to see what it does. Or you ask it to write a rap song about Shakespeare, or some kind of silly or gimmicky thing. But it’s very easy to understand just how pervasive the impact of this technology can be in a lot of different jobs. You can think about your own job. Many of the things that you do in a day could be done at a reasonable level by technology like this, with fairly limited prompting. It’s not as good as a lot of the stuff that we do, but it’s good enough for a lot of uses. And maybe in some contexts it’s actually as good or better than what we can do ourselves.
Loney: In marketing, how will AI play a role in terms of its connection with the consumer?
Puntoni: Definitely, consumer expectations are going up very quickly. Think about chatbots. Your expectation of what a chatbot is able to do has gone up a lot in the last year or two, from expecting a very rudimentary and probably unsuccessful kind of interaction to an interaction that can be pretty smooth and probably effective. I think that means companies cannot rest on their laurels; they need to keep improving. They need to deploy technology for the benefit of the customers. If they don’t do that, they’ll likely be left behind.
But there are also issues around AI safety. If the technology gets deployed at scale in society, and there are more and more people interacting with it, there are also situations where we can see dangers for consumers. For example, there are a lot of media stories about people falling in love with an AI, or leaving their husband because their AI advised them to. Clearly, this is potentially an issue, especially if you have consumers with mental health issues interacting with the technology. You could easily see how this can become problematic.
The thing to understand here is that machine learning algorithms make predictions and respond in context. You cannot know beforehand what they’re going to say. If a consumer, for example, expresses a wish to end their own life, it’s not clear that the algorithm will say anything helpful or safe. And the person who coded this technology also cannot know. It’s not like a scripted system, where someone with a clinical understanding of that circumstance can enter safe instructions into the system. The system does what the system does, and it might not be what we need it to do. There are clearly issues around AI safety. We’re going to see a lot of action on the regulation side, but I think it’s exciting and also scary at the same time.
Instead of Fighting the Rise of AI, How Can We Adapt?
Loney: Is it important to frame the use of AI as complementary to what we already have in society, instead of relying upon it as the next step in how to do everything in our lives?
Puntoni: I think you make a good point. Too much of the discussion around AI over the last few years has been “human or machine.” Meaning, we’ve been thinking about how we can take the human out of the equation. Take, for example, self-driving cars. The way the algorithm works is that you have a human driving on the road, with the algorithm recording everything that’s happening. The algorithm then learns what to do when it faces a particular set of conditions: what would a human do? It’s copying the human. This is basically how a lot of these machine learning algorithms work.
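To make that “copying the human” idea concrete, here is a minimal Python sketch of supervised imitation learning (behavioral cloning): a model is fit to recorded human decisions and then asked what the human would do in a new situation. The sensor features, steering values, and the simple linear model are hypothetical simplifications for illustration, not a description of any actual self-driving system.

```python
# Minimal sketch of "copying the human" (behavioral cloning).
# All data and feature names here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical driving logs: each row is a snapshot of conditions
# (offset from lane center, speed, curvature of the road ahead).
sensor_readings = np.array([
    [ 0.2, 30.0,  0.01],
    [-0.1, 45.0,  0.00],
    [ 0.5, 25.0,  0.03],
    [-0.4, 50.0, -0.02],
])

# The steering angle the human driver actually chose in each snapshot.
human_steering = np.array([-0.05, 0.02, -0.12, 0.10])

# Fit a model that answers "what would the human do?" for given conditions.
model = LinearRegression().fit(sensor_readings, human_steering)

# For a new situation, the algorithm imitates the human's likely choice.
new_conditions = np.array([[0.3, 35.0, 0.02]])
print(model.predict(new_conditions))
```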
But I think we need to change gears now and move into the mindset I call “human and machine.” Not just understanding how we can mimic [or replace] a human decision process, but focusing more on how we can leverage the unique capabilities of algorithms and of humans, which are different and complementary, in order to improve business. It’s not about replacing the human. I think it’s about making the human more effective, more inspiring, more productive.