Generative AI promises productivity and cost-cutting gains, but it also has the potential to increase employee well-being. That’s why Wharton’s Stefano Puntoni wants companies to put their workers at the center of the AI conversation. This episode is part of the “Research Spotlight” series.

Transcript

The Psychological Impact of Gen AI

Dan Loney: Not only will generative AI impact the work that we do on a daily basis, there is a belief that it could change how we think about work, and that it could pose threats to how we go about that work. This is the genesis of some research done by a group of professors, including our next guest, Stefano Puntoni, who is a professor of marketing here at the Wharton School.

Stefano, this is one of the many areas that we continue to see evolving right now. How is AI impacting so many different aspects of our work?

Stefano Puntoni: I’m a consumer researcher, so a lot of my research is on the user of the technology, the consumer. But I think one of the biggest areas of interest for companies thinking about AI deployment is the impact on employees. Because obviously, this technology is going to be useful only to the extent that people actually use it. Understanding adoption patterns and psychological reactions to AI tools is going to be very important to understanding the impact of AI programs.

Loney: We’re talking about this in the scope of the employee. But how important is it for the employer to recognize this as they’re putting a lot of these processes in place?

Puntoni: Almost every company today has some kind of AI deployment plan, where basically they are talking to tech vendors or consultants, or doing it in-house. They develop some kind of tool. It might be something like a generative AI engine, a ChatGPT type of system, or a corporate version of one. The idea is that this technology has the promise of accelerating innovation, accelerating productivity, making firms faster and better at what they do.

But this technology is really only going to have an impact to the extent that employees, the people who do the work in organizations, find a way of using it, find a way to integrate these tools into their workflows and with their competence and expertise. That requires a lot of changes to the way we think about work, and oftentimes companies are not thinking enough about the psychological aspect.

I advise companies who are interested in deploying AI at scale within their organization to have almost two parallel tracks. One is a tech track, where you’re working with your technology teams and outside vendors to deploy solutions that work. There you have a lot of concerns about data safety and compliance and performance and benchmarks and all of that. But at the same time, you also need to marry that technical effort with a management and leadership effort, which is targeted at employees, to understand and explain: What are we doing? Why are we doing it? What’s in it for the employee? Is this going to be a threat to their career and livelihood, or is it going to benefit them in some way, and how? And how can you do that with an authentic voice?

I think it’s important to have both going at the same time. If you do only the technical stuff but you drop the ball on the communication and leadership piece, I think you cannot expect very good results.

Does AI Threaten Our Well-being?

Loney: What are some of these threats that you believe will come forward here?

Puntoni: In our paper, we basically adopt a very famous psychological theory that we find useful for organizing our thoughts in this area. We say that psychological well-being is really a function of experiencing feelings of competence, of autonomy, and of relatedness. These are the components of self-determination theory, a theory going back to the ‘80s. So it’s been around for a long time.

These are three important antecedents of psychological well-being. We basically argue that gen AI can have important benefits for all of them. It can make you feel more competent, when all of a sudden you’re able to do things that you couldn’t do on your own before, because gen AI makes it possible, for example, to do advanced analytics using natural language. It can be empowering, so it can give you feelings of autonomy when you realize that now you can do this. There is a sense of being independent, of not having to rely on others. And it can help relatedness when these chatbots create seamless parasocial experiences and can embed themselves into a team or workflow.

There are these benefits. But at the same time, gen AI can also be a threat to all of this. It can be a threat to competence, as in all of these discussions about jobs, where all of a sudden people are wondering about the value of their skills. It can be a threat to autonomy, because now they feel that they have to adopt these tools and are no longer in control of their workflow; they have to delegate to these AI systems. And it can be a threat to relatedness, when you feel alienated from your team or from the company because you feel this has been deployed in a way that is threatening to you.

Loney: Isn’t it a fine line between the two, in terms of the impact that an employee could feel?

Puntoni: Yeah. I think the potential is enormous for boosting psychological well-being, productivity, and performance. But the reality is that in many organizations, the conversation is not really oriented toward the psychological well-being and career advancement of the employees who start to use this technology. A lot of the conversations in business around gen AI are about cost-cutting, about productivity increases to the detriment of headcount. And those conversations are clearly threatening to people. You cannot expect people to hear this stuff and think, “Yeah, that’s fine by me.” It seems to me there’s a lot of potential for boosting the psychological well-being of our employees. But in practice, the way that lots of conversations are going, they’re pointing in exactly the opposite direction.

Loney: Is there an element of generational understanding in this mix here? The reaction of somebody who might be in their 40s or 50s and dealing with AI might very well be different from somebody who’s in their 20s or 30s and is much more digitally savvy?

Puntoni: One needs to understand the situation of the person to be able to make some predictions as to what people are going to find psychologically threatening. Age is an obvious dimension; function, maybe; also role and tasks. There might be more. When it comes to age, what’s interesting is that, on the one hand, we know based on most of the research on technology adoption that younger people tend to be faster and more keen on new technologies than older people.

But in this case, there’s also a lot to be worried about in terms of junior positions. Because what we see is that gen AI is being adopted in a way that oftentimes looks like an AI intern. A lot of entry-level positions are now being treated as something that you can do with AI, which might pose a greater actual threat to the employment of younger people than of older people. There is even some evidence that AI investments are slowing down career trajectories for younger people while accelerating those of older people who are already more senior in the organization. It’s not obvious which way things are going to go.

How Are Workers Adapting to Gen AI?

Loney: You also have to think about the persona of the individual. Each person is going to react differently to a lot of these components.

Puntoni: Yeah, absolutely. In the paper, we first start by sketching the psychological threats that can emerge from AI deployment efforts. Like I said, there are three broad categories: threats to competence, threats to autonomy, and threats to relatedness. Then in the second part of the paper, we ask the question you just asked: What kind of reactions can we expect people to engage in if they feel a threat? We build on the literature on coping. We argue that five key reactions will be especially common. And they vary in the extent to which they are positive or adaptive, and the extent to which they are negative for the organization and for the employee.

There is one we call direct resolution. You feel a threat to, for example, your competence, and you decide to upskill yourself. You sign up for a prompt engineering course to become a proficient user of gen AI. That is tackling the threat directly, to solve it and become a proficient user and benefit from it.

The second one we call symbolic completion. That strategy is one where the employee is reminding themselves and others of the role of human judgment, underlining it. For example, you can imagine a consultant who, in the course of a presentation, underlines the human insights that were brought in.

Then there is dissociation, where you are trying to move away from gen AI tools or gen AI jobs. For example, a graphic designer might rediscover old-fashioned techniques. As an element of this, there might be a component of sabotage, trying to behave in a way that makes the AI fail. That’s obviously not good for the company’s effort to benefit from gen AI.

Then you have escapism, which is basically disengagement. I now can have gen AI doing all this work, and I’m spending all my time scrolling on the phone. Clearly not good either. And then you have one called fluid compensation, which is trying to assess what AI is doing well and what it is not doing so well, and then pivot a little bit. Recalibrate your activity and your skills toward the areas where you feel AI is falling short. That’s a more adaptive one.

Loney: There is a bit of fluidity to AI right now in terms of how it’s being implemented. How may we see changes occur to better adapt AI to specific businesses as we move forward?

Puntoni: The technology is changing really fast, so it’s very difficult for people to find sure footing. In fact, I believe that one source of threat for employees is precisely the pace of change. People feel things are moving so fast that they can never catch up, and everybody feels they’re lagging behind. I hear many organizations saying, “We are one year behind.” But obviously, if everybody is one year behind, nobody’s behind.

It’s this feeling of a bit of FOMO, of never knowing what’s next, the next new gadget or whatever. That is kind of destabilizing, by definition. As these capabilities change, it is difficult to say, “I should be investing in this.” Two years ago, everybody was talking about prompt engineering. But increasingly, prompt engineering is being embedded within systems that are getting more and more sophisticated, for example, with the reasoning models. Some of those principles might already not be worth all that much.

To what extent can we count, for example, on the technology not acquiring certain capabilities that right now seem safe from the point of view of an employee? It’s difficult to say, so I think there’s a lot of uncertainty. And that uncertainty is actually a part of the problem.

Loney: What do you and your colleagues take from this research that’s most important for both companies and employees to truly understand?

Puntoni: To me, the bigger picture here is that this technology is quite different from previous waves of digital transformation. If you look back 10 years to the cloud computing revolution, what companies were doing was making major investments and taking big risks in saying, “We’re shifting everything away from our servers onto the cloud. We transform our IT function to pay-per-use self-service, and we’re going to get our IT needs satisfied that way.”

Now, that was a big change, a big deal. But if you think about the user of computers, imagine working in a company, writing an email. Whether the email sits on your desktop or up in the cloud, you write the same email. It doesn’t necessarily require the organization to change the way that people work. It’s a decision that will be made by the CTO and the CFO, together with the leadership, who say, “Is it good or bad?” It’s a go/no-go decision. You pull the trigger and you do it, and hopefully it works out. But once it’s deployed, you don’t have to teach people anything. A little bit, but not much.

With this technology, because we are using it to outsource cognitive labor, it is only going to be productive to the extent that people find clever ways of bending it into a workflow. For that, you need the people in the function to do the work of integrating it. It now requires the whole organization to get on board, so it’s a much bigger, harder change management challenge.

As an academic outside business, what I’m lamenting a little bit is that so many of the conversations in the media are proclamations from CEOs for investor relations, or for trying to boost the share price. You heard Klarna recently, or Duolingo, or whatever. Basically, they emphasize headcount reduction. And of course, companies operating in a competitive environment ought to find the efficiencies that they can find. But if the only thing we can think about around AI is what we do in order to fire people, that’s not inspiring to anybody in the organization. It’s only a threat.

I think we need to have a conversation that also tries to bring employees into the picture and ask, “What’s in it for them?” How do you use gen AI to make them more successful, to capitalize on their expertise? To elevate their status, accelerate their career, or maybe even simply give them an afternoon off if they can be more productive? You know, do something for them. And I think that conversation is often missing.