While workers are looking to AI to boost their productivity, so are managers. Wharton professors Matthew Bidwell and Sonny Tambe join Eric Bradlow, vice dean of Analytics at Wharton, to discuss AI’s transformation of human resources, offering guidance for managers and decision-makers on ways to leverage AI to improve the workplace while dealing with uncertainty and other rising concerns. This interview is part of a special 10-part series called “AI in Focus.”

Watch the video or read the full transcript below.

Transcript

Eric Bradlow: Welcome, everyone, to the current edition of the Wharton Sirius XM podcast series on artificial intelligence. I’m Eric Bradlow, vice dean of analytics here at the Wharton School, and also the K.P. Chao Professor of Marketing, Statistics, and Data Science. Today’s episode, as all of our episodes are, is sponsored by Analytics at Wharton and AI at Wharton. Today we’re going to talk about a topic that’s hard to avoid: you can’t walk down the street or talk to anyone in business without them speaking about it. That topic is AI in human resources.

So I’m joined here today by two of my colleagues. The first is my colleague from the Management Department, Matthew Bidwell. Matthew is the Xingmei Zhang and Yongge Dai Professor. He’s also the faculty director of the Wharton People Analytics initiative, a center that’s a big part of Analytics at Wharton, and the academic director of Wharton’s Center for Human Resources. Matthew, welcome to our show.

Matthew Bidwell: Thank you very much for bringing me on, Eric.

Bradlow: It’s great to have you here. I’m also joined by my colleague, Sonny Tambe. Sonny is an associate professor of operations, information and decisions at the Wharton School, and also teaches many of our courses on AI. So Sonny, welcome to the show.

Sonny Tambe: Thanks. Thanks for having me.

Bradlow: It’s great to have you both on such an important topic. Let me start with the beginning. Matthew, maybe I’ll start with you. Since you are the faculty director now of Wharton People Analytics, how do you think AI is going to affect the way that we manage people? What are both the concerns that you have, and equally importantly, what are the big opportunities?

Bidwell: Big question. I mean, obviously we need to think a little bit about how we define AI. These days, when we think about AI, we kind of leap straight to ChatGPT and large language models, and so on.

Bradlow: Or what a lot of people would call the generative AI part, where the computer, the large language model, is generating a response.

Bidwell: Yeah. You know, if you look back historically over the last five years, people have used AI to describe almost anything that involves numbers. So it’s a broad range. You know, I think with any of these technologies, as ever, there are a lot of opportunities to improve how we manage people. When we look at how people are managed, so much of what goes on is gut decision-making, this kind of intuition. And we have pretty much a century of research suggesting that our guts are terrible decision-makers. There’s a reason why we should be thinking with our brains rather than our stomachs. And so more broadly, when we are more systematic, when we are more thoughtful, when we rely on data in making decisions — who do we hire, who do we promote, how do we manage people, all those sorts of things — we usually make much better decisions. And so to the extent that AI helps us be more systematic in doing that, it’s going to be really helpful. It’s already being helpful.

There are obviously big concerns. I think three spring to mind. One big concern everybody has is bias and discrimination. Again, we know there’s a lot of bias in the labor market. We know our guts are discriminating all of the time. The good news is, most of the time AI is probably going to be less discriminatory. But sometimes it is still going to discriminate, particularly when we look at some of these more sophisticated large language models, right? They have been trained on the corpus of data that is out on the internet. Even when people aren’t being deliberately sexist and racist, that corpus embodies a whole set of cultural assumptions. Take sexism, for example. There have been some very nice studies that look at word embedding models and other things that are trained on the corpus of text that you see on the internet, and they show, not surprisingly, that words to do with careers are more closely related to men’s names, and words to do with home life and family are more closely related to women’s names. And so once you start using those models to make decisions about employment, I think the risks of bias and discrimination are very serious.
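A minimal sketch of the kind of audit those embedding studies run, assuming the gensim package and a small pretrained GloVe model; the word lists here are illustrative choices, not taken from any particular study:

```python
# Hypothetical audit in the spirit of word-embedding association tests.
# Assumes gensim is installed; downloads a small pretrained GloVe model.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")

career = ["career", "salary", "office", "business", "profession"]
family = ["home", "family", "children", "marriage", "relatives"]
male_names = ["john", "paul", "mike", "kevin", "steve"]
female_names = ["amy", "lisa", "sarah", "diana", "kate"]

def mean_similarity(words, names):
    """Average cosine similarity between two sets of vocabulary items."""
    sims = [model.similarity(w, n) for w in words for n in names]
    return sum(sims) / len(sims)

# If the training text encodes the stereotype Bidwell describes, career
# terms sit closer to male names and family terms closer to female names.
print("career vs male  :", mean_similarity(career, male_names))
print("career vs female:", mean_similarity(career, female_names))
print("family vs male  :", mean_similarity(family, male_names))
print("family vs female:", mean_similarity(family, female_names))
```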

And I think one of the things we worry about particularly, more generally, with using algorithms rather than judgment in managing people is this: if you have a manager who discriminates, that’s a problem for the people working for that manager. If you have an algorithm that discriminates, I can apply that hiring algorithm at scale, across an entire company, across an entire industry. The sheer volume of people that are potentially affected is huge.

Bradlow: That’s one of the advantages of scale, and also one of its disadvantages.

Bidwell: Yeah. And so I think that is a very live concern. If I can go on a little bit, just talk about my other two concerns — I know we have many other questions. But I think another couple of things that we’re thinking about. Algorithms, like any technology, have often been applied in a fairly punitive way in HR. I think the classic example of this is scheduling software. With scheduling software, it’s very tempting. If you’re an engineer, sitting in your office trying to do the right thing, you’re like, “How do I increase productivity?” And the way I increase productivity is by carefully matching people’s schedules to shifts in demand during the day. So say I’m running Starbucks. I want to give somebody a shift that starts at seven in the morning and runs until ten. Well, by then we’re through with the morning office rush. And then I want them to go away for a while, so I don’t have to pay them. And then maybe I want them to come back between four and six.

And so what you find is, these systems end up creating schedules that are great for the company but have proved terribly damaging for the people who actually have to try and fit their lives around what the algorithm thinks. And frankly, they probably end up causing long-term damage in the organization as well, because you maximize that match between supply and demand, but you end up driving up attrition, as people won’t stay with those sorts of schedules.

And so I think there’s a broader issue. There’s always a tension in managing people: how much do you take into account what those people think? But I think when you have people managed by AI, you have a bunch of assumptions that are being baked in by the schedulers, by the engineers, whoever. They’re increasingly detached from what’s going on on the ground. And I think that often leads to some really bad and destructive decisions.

And so I think done well, we can incorporate a lot of these algorithms and manage people better. But it does require us to really think about how these algorithms are being used, and to have that closed loop. So we engineer something, and then we say, “Okay. What’s actually happening?” And we stay very alive to the problems it’s creating and go back and reengineer it. I think when you just sit down, do an optimization problem, and then put it out into the world and let everybody suffer the problems, that creates a lot of damage, too. So those are some of the things I’m worrying about.
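A toy illustration of that tension, with made-up demand numbers rather than any vendor’s actual algorithm: an objective that only matches staffing to demand happily picks the split shift Bidwell describes, while pricing in the cost to the worker flips the answer.

```python
# Hypothetical single-worker scheduling example; all numbers are invented.
# demand[h] = baristas needed during hour h (24-hour clock).
demand = {7: 3, 8: 3, 9: 2, 10: 1, 11: 1, 12: 1,
          13: 1, 14: 1, 15: 1, 16: 2, 17: 2}

split_shift = {7, 8, 9, 16, 17}      # 7-10am, leave, return 4-6pm
straight_shift = {7, 8, 9, 10, 11}   # one continuous 7am-noon block

def coverage(hours):
    """Demand covered: all the naive objective sees."""
    return sum(demand.get(h, 0) for h in hours)

def blocks(hours):
    """Contiguous blocks worked; more than one means a split shift."""
    hrs = sorted(hours)
    return 1 + sum(1 for a, b in zip(hrs, hrs[1:]) if b - a > 1)

def score(hours, split_penalty=0.0):
    """Coverage minus a penalty per extra block in the worker's day."""
    return coverage(hours) - split_penalty * (blocks(hours) - 1)

for penalty in (0.0, 3.0):
    best = max([split_shift, straight_shift], key=lambda s: score(s, penalty))
    print(f"penalty={penalty}: best schedule is hours {sorted(best)}")
# With penalty=0.0 the split shift wins (it covers both rushes); once the
# objective prices in what the schedule costs the worker, it loses.
```

The substantive choice here is not the optimizer but what the objective counts, which is exactly where the conversation turns next.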

Bradlow: I think one of the things we always talk about is, first of all, what is the objective function you’re optimizing? That’s the first thing. And I think we would all agree — and I’ll turn it over to Sonny in just a second — that these things should be a decision support tool. The minute you automate them, you have the danger of maximizing some objective that may not be good for the employees, and may well not be good for the firm either. So Sonny, let me turn things over to you. I know for a number of years, you’ve been one of our pioneers in teaching AI to our students. What’s changed? Why is everyone so excited today? I’m a statistician. I’ve been here at Wharton for 28 years. We’ve been doing big data science for a long time. What’s unique and what’s changed about today that’s made everybody want to take your class, and everybody be interested in every single thing you’re working on?

Tambe: I think you touched on it just a second ago: this move from decision support, which is what a lot of technology has been doing for a few generations now, to a world of potentially either recommending a decision or even automating decisions. And what we do at work all day is make decisions, right? That’s what businesses do. It’s what organizations are optimized to do. And so a technology that can make or recommend decisions has implications for all parts of the organization. People compare it to electricity. It has the potential to change everything. I think that’s one part of the reason people are energized about this particular topic.

The other thing that’s exciting but also somewhat concerning is that I think there’s more unpredictability around AI right now than there has been for past tools, right? As we scale up these models, people are seeing emergent capabilities that they would not have expected, right?

Bradlow: Let me press you on just this one topic for a second. I can imagine uncertainty in a few things. One is, I type something into a large language model. ChatGPT, Bing AI, et cetera. Something comes out. I type the same thing in, and maybe the same thing doesn’t come out. In the measurement literature we’d call that “test-retest reliability.” That’s one possibility. Another is, Sonny Tambe changes one word in the prompt. Prompt engineering, if you’d like. Something radically different comes out. What form of uncertainty are you talking about? Maybe it’s the measurement one that I think about, or maybe it’s a different one.
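Bradlow’s first notion is easy to make concrete. Here is a rough sketch of a test-retest check, assuming the openai Python package (v1 client), an API key in the environment, and an illustrative model name:

```python
# Send an identical prompt several times and measure how much the
# answers agree with each other.
from difflib import SequenceMatcher
from itertools import combinations
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative assumption; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling on; temperature=0 is most repeatable
    )
    return resp.choices[0].message.content

answers = [ask("In one sentence, what does an HR analyst do?") for _ in range(5)]

# Mean pairwise string similarity as a crude reliability score (1.0 = identical).
pairs = list(combinations(answers, 2))
reliability = sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)
print(f"mean pairwise similarity: {reliability:.2f}")
```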

Tambe: Right. No, when I think about uncertainty in this context, I’m thinking about the question of where these technologies can add value to the jobs we do, right? If I, as a person who designs jobs or plans an organization, think about where it fits in, the answer is changing in ways that I think are a little bit unpredictable. And so if we think about what it can do now and what it can do tomorrow, even the people who are at the frontier of this technology find themselves quite surprised these days: “We didn’t think it was going to be able to do that.” And so that kind of uncertainty, combined with the fact that it has the potential to affect decisions everywhere, creates a lot of potential for change in ways that we don’t exactly know are coming. But it is energizing, in a way.

Bradlow: Sonny, maybe you could just clarify something for me and for all of our listeners here on our show. The things that generative AI models can do: the algorithm itself, the statistical engine itself, doesn’t just come up with them, right? Somebody has to have programmed it to be able to do a certain type of problem. It’s not like it generates solutions to problems on its own. Let’s say you wanted your AI engine to make some decision about how to optimally schedule something, as Matthew described. Somebody, a programmer somewhere, had to have said, “This is a problem this AI engine should solve.” It’s not like the AI engine searched around the world and said, “Let’s solve timecard scheduling problems.” The algorithm doesn’t decide the problems. Humans help the algorithm decide which problems to solve, right?

Tambe: Humans on the input side absolutely do help decide what problems to solve. At the same time, these models are incredibly general-purpose. So the range of problems they’re capable and flexible enough to solve is quite impressive.

Bradlow: Yeah. I know, Matthew, you wanted to jump in here and talk about this idea of the breadth of problems, and what the future might be.

Bidwell: Well, I think on the uncertainty piece, it’s very interesting. Sonny knows much more about this than I do, just for anybody listening at home, so I’m mainly curious to hear what he thinks. But it strikes me that a lot of the uncertainty is about how good they’re going to get, and how quickly. You asked why we’re all talking about it, why everyone wants to talk about it. It’s partly because we’re all so surprised by the leap in capabilities of these models over the last year. Just blowing through the Turing test in a way that we thought was a long time off.

But the big question is, have we now reached another kind of plateau? At which point you say, “These are neat tricks, and they can do some things quite well. But given the lack of accuracy, the hallucinations, all of those sorts of things, are we ready to turn over large processes to them wholesale?” I’m not sure. Or are they going to keep getting better?

And the thing it makes me think about is self-driving cars. I remember seven or eight years ago, when we were thinking about buying our next car, I thought, “This is the last car I’ll ever buy. Because by the time I’m ready to buy another car, all cars will be self-driving. There will be no steering wheels. Why would we have them?” And it turned out, you could get 90 percent of the way there, but that last 10 percent proved really hard.

Bradlow: That’s a great point.

Bidwell: And so the question in my mind is, could we see something similar? Or do we think, really, they are going to keep improving at this rate?

Tambe: Yeah, no. Absolutely. I 100 percent agree. The challenge with all of these tools — and you mentioned this, I think, a little bit when you talked about bias and discrimination in managers — is that they are invented in a context where we have, I don’t know, 200, 250 years of infrastructure about what to do when a manager gets it wrong. We understand how to deal with human decision error. Any time you’re talking about putting one of these tools in place — and that includes self-driving cars, it includes large language models — and it gets it wrong, we don’t have that legal and organizational infrastructure. And that has been a constraint, and I suspect it will continue to be.

So we may well have hit a plateau. It’s just that it’s hard to say what the future is going to bring with these tools. If I think about the history of science a little bit, this is somewhat rare, in that you have a tool that can do certain things, and the scientists are now trying to figure out how it works. That hasn’t happened very often in history. And so that uncertainty is why I tend to caveat some of those comments. It’s new, I think, when you compare it to other technological innovations.

Bradlow: Let me ask you both next about what I’ll call application areas that you think are — let’s be positive people — extremely positive. So for example, one area that you already mentioned, Matthew, was hiring. Let me ask you whether the following would be a good example or a bad example. I’m going back to my days prior to coming to Wharton — you two may not know this — when I was at the Educational Testing Service in Princeton. We were working on automated scoring algorithms for essays, a long time ago. People may not have called it AI, but we were ingesting the words and trying to construct scores. But not for the purpose that when Matthew Bidwell takes the SAT, that’s the score he’s going to get. Rather: how do I use humans in the most efficient way? When I have millions of essays to score, how can I use an engine to do a first pass, and then have humans come in on the really tough ones?

So let’s take hiring as an example, but use any example you want. Why can’t I use an AI engine to do a first-pass screen of the 1,000 resumes that I get for a job? Nine hundred and fifty get pruned off by the algorithm, and then for the 50 that seem to have the credentials that match what I want, I bring in humans to intervene. Do you have any concerns about that two-step process? Now, bias and discrimination can happen in the algorithm, so maybe there’s some in that 950. But what do you think about that? And if you have a better example than mine, I’m sure our listeners would love to hear it.

Bidwell: That’s great. I would totally do that. And actually, there have been some experiments with this. Generally, they’ve worked out reasonably well. I always say it’s not so much that I’m a huge AI optimist; I’m just a human skeptic. When you look at how people actually make hiring decisions, it’s so haphazard that hiring, I think, is actually one of the places where this tends to work really well.

Now, we can get back to that question about, is this decision support? Is it actually making the decision? One thing just to be aware of is that in practice, that is a very fine line. I think most of the evidence is that people tend to do what they’re told. And so frankly, if the algorithm says, “You should hire this person,” most of the time that’s what they’re going to do. So when we say, “Oh, well, there’s a human in the loop, so it’s okay.” Yeah, but are they really? Once they’ve been told this is the right way to do it, they’re going to largely follow the advice.

But yes. Actually, I think that is one of the better places. I think there are concerns about bias. My guess is, in the vast majority of these cases, the bias of the AI is still orders of magnitude less than the biases of the human rater. So I’m reasonably bullish on that as a case.
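A minimal sketch of the two-stage funnel they are weighing, with a deliberately crude keyword score standing in for a real, validated model, and a fabricated resume corpus for illustration only:

```python
# Hypothetical two-stage screen: an algorithm does a cheap first pass
# over 1,000 resumes, and humans review only the top 50.
import random

random.seed(0)
REQUIRED = {"python", "sql", "people analytics", "hr"}

def first_pass_score(resume_text: str) -> float:
    """Fraction of required skills mentioned; a stand-in for a real model."""
    text = resume_text.lower()
    return sum(skill in text for skill in REQUIRED) / len(REQUIRED)

# Fake corpus of 1,000 resumes, each listing three random skills.
skills = ["python", "sql", "people analytics", "hr", "excel", "finance"]
resumes = [" ".join(random.sample(skills, k=3)) for _ in range(1000)]

ranked = sorted(resumes, key=first_pass_score, reverse=True)
shortlist, pruned = ranked[:50], ranked[50:]

print(f"{len(pruned)} pruned by the algorithm, {len(shortlist)} go to humans")
# The bias concern lives in the pruned 950: auditing score distributions
# across demographic groups belongs in this step, before anyone is cut.
```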

Bradlow: Sonny, what’s your thought on both the example I gave, Matthew just talked about, and maybe, what do you think is the most promising area where AI can have a transformative effect on human resources today?

Tambe: I want to underscore and pick up on something you said, which is: let’s be positive people. I think the framing of the conversation in the broader press, maybe, has been too zero-sum, pitting employers against employees or managers against workers. And there may certainly be some of that. But it seems that there’s a lot of opportunity to use these tools in a way that enhances employee experience, that enhances employee well-being.

I was talking to an executive last week who’s using generative AI to write performance reviews. First of all, it saves him a ton of time. But the second thing is that he’s able to use that extra time to do one-on-one mentoring and coaching. He can also, by the way, provide more frequent performance reviews, almost on a monthly basis instead of twice a year. So, lots of opportunities where we can think about the employee. A lot of what these tools are doing is taking over the parts of the work that we may not enjoy so much. There are definitely places where we can use AI, at least as a first-order application, to ask: how can work be better? How can we do a higher-touch, better job of making sure our employees stick around, are happy, and are being productive?

Bradlow: Matthew, let me just ask you. One of the things I love doing with my MBA students — actually, I’ve been doing this for years now — is to start out one of the lectures by saying, “AI” — or in those days, “machine learning” — “is coming for your job.” Which kinds of industries, or which areas of the workforce, do you see where, if you were advising our MBAs or undergrads, you’d say, “I don’t know, this seems like a pretty risky area to invest in today as a career”? For example, I’ll pick my home department of marketing. If I were in the creative business today, coming up with advertisements, I’d be thinking, “I don’t know. It seems like AI engines could do a pretty good job of coming up with the massive combination of features of ads that seem to be effective.” Anything come to your mind?

Bidwell: I’m nervous about this. I’ve chatted with Sonny about this before. We’ve had two decades of people making predictions about what work is going to go away because of AI. And in retrospect, they’ve mainly been hilariously wrong. So I feel like, in this area, it’s very hard to predict. We’ve recently seen these analyses saying that, when we look at which jobs are going to be most affected by these new AI technologies, things like English teachers are at the top of the list. And you’re just like, no. No. If I think about which jobs are likely to be safest from AI, I cannot see ChatGPT maintaining control in a class of 14-year-olds. It’s just not going to happen.

So I think it’s very hard. We are seeing some of it. You mentioned creatives; I think freelance graphic designers have already taken a really big hit. So we’re seeing some jobs go. Essay mills, for example: if you made your money by ghostwriting papers for college students, I’ve got really bad news for you. And Stack Overflow, which provides advice to programmers, has just been laying people off. We’re seeing these narrow, slightly strange niches getting wiped out. But yeah, I’m nervous about making big predictions about what’s going to be affected.

Bradlow: Any thoughts on that?

Tambe: Yeah, I’m with Matthew on this. I’m relatively optimistic, in the sense that you would expect to see some verticals affected quite a bit. Maybe customer-service operations, or customer-facing operations. But that’s been true of tractors and Xerox copiers and everything in between. By and large, the evidence seems to be saying, for large language models at least, that all of us are going to be using them to some degree to make us a little bit more productive. The productivity gains will be promising, but they’ll be gradual. So we’ll be able to get rid of some parts of our jobs we don’t like and become a little bit more productive. And any job loss will come at a pace that the economy hopefully will be able to absorb without any problems.

Bidwell: Yeah. I do think if you had predicted, when the internet came in, that one of the occupations worst affected would be journalists, we’d have asked you to show your working. Right? It’s quite unpredictable how these things play out.

Bradlow: Another question I’m sure our listeners here on Sirius XM and our podcasts would like to know about is, how are you two, as educators, using it in your own classes? For example, are you going to allow students to submit assignments using generative AI? Are you going to encourage its use? Are you going to take certain parts of the material that you’re teaching students and say, “Actually, you’d be better off just learning it through a generative AI engine”? I’ll start with you, Matthew, and then I’ll go to Sonny, whose entire course, one could argue, is about this. How are you going to use it in the courses you teach?

Bidwell: It’s still a work in progress, I have to say. I teach a class on people analytics, and as part of that, I get people to analyze data sets. I’ve tried throwing my problem sets into ChatGPT. It’s made some fairly elementary errors, which has reassured me that it’s not going to make me completely redundant. But I think I will be encouraging my students to use it as a way to work on the problems, with the caveat that they need to understand what the answers are. Just expecting ChatGPT to get it right is going to lead them astray. I think it’s a big problem for us, though, particularly in the Management Department. A lot of the way we in the social sciences have tended to evaluate people and get them to learn is, “Go write a paper.” And it’s going to take us a while to figure out how to redo pedagogy when that is just so easy to get an AI to do. So we’re actually one of the industries that I think is most affected, in some ways. We’re not going to lose our jobs, I hope. But we’re really going to have to change how we do what we do.

Bradlow: And Sonny, in your answer I’d love to hear about coding, since I know at least one of the two courses of yours I’m well aware of has a significant coding portion. I’ve heard some people say it doesn’t matter anymore whether you know R or Python, because you can do the conversion back and forth; you’ve just got to be able to program in something, and ChatGPT will do the rest. How are you thinking about it?

Tambe: Yeah, absolutely. I’m somewhat fortunate, in a way, because AI and analytics are so central to the courses I teach that I can move these questions directly to the center. And I think what you said is absolutely right. It’s first order, these days, for students to understand how to think about a coding workflow that involves large language models. So, what changes? Where does the time go? How much coding do you need to know to use this effectively? These are all questions we don’t quite know the answers to, but that I think belong at the center of these types of courses.

And then another course I teach on AI asks some of the bigger questions, the questions Matthew raised: bias, ethics, those sorts of things that we’re just not quite prepared for yet, but that managers are absolutely going to have to deal with over the next two decades or so. Those are also central to how we spend our class time, and there are so many new and emerging questions every year. There’s never enough time.

Bradlow: Maybe in the last minute or so that we have, I’ll ask you each for a 15-second answer. I’m an employee. What do I need to know about AI that’s going to help me do my job better? What’s the one thing I should know how to do, as an employee? Matthew, any thoughts?

Bidwell: Experiment, I think. Basically, just try things. Play with the technology. Get online and see where it can take over parts of your job and make you more effective.

Bradlow: Sonny?

Tambe: I would say, be prepared to embrace change. We’re just entering a period where I think the way we do functions and operations and business processes is going to start to change quite rapidly. And from an employee’s perspective, I think they should have the mindset that they need to stay on top of how these things are changing.

Bradlow: Well, on behalf of Analytics at Wharton and AI at Wharton, I’d like to thank my colleagues Matthew Bidwell and Sonny Tambe for our episode here on AI in human resources. We’re going to have a sequence of these, and I’m very excited to bring this content to everyone here on Sirius XM. Thank you for joining us.

Bidwell: Thanks, Eric.

Tambe: Thank you.