Adopting new technologies like AI and robots can drastically shift an organization’s processes from top to bottom. Wharton professor Lynn Wu joins Eric Bradlow, vice dean of Analytics at Wharton, to discuss how automation and artificial intelligence impact physical labor, productivity, and human resources. This interview is part of a special 10-part series called “AI in Focus.”

Watch the video or read the full transcript below.

Transcript

Eric Bradlow: Welcome, everyone, to the Wharton Sirius XM podcast series on artificial intelligence, sponsored by Analytics at Wharton, and AI at Wharton. I’m Eric Bradlow, professor of marketing, statistics and data science here at the Wharton School. I’m also the vice dean of Analytics at Wharton. I’m here interviewing thought leaders at the Wharton School on the impact of AI in business. And of course, today’s episode, which is part of our multi-part series, is no exception. We’re going to be talking about AI in robotics, or if you like, AI in automation.

I’m going to be talking to my colleague, Lynn Wu. Lynn is an associate professor in our operations, information and decisions department here at the Wharton School. She teaches everyone — undergrads, MBAs, and Ph.D. classes — about the use and impact of emerging technologies. Lynn, welcome to our podcast.

Wu: Thank you for having me.

Bradlow: It’s great to be here with you. One of the things I mentioned to you briefly off-air is that a lot of people might be listening to this podcast and saying, “Wait a second. I thought AI was just generative AI. That’s what I’m hearing about today. I’m hearing about ChatGPT, Bard, and OpenAI. What the hell does AI have to do with automation?” So I thought maybe for a few minutes it would be good if you took our listeners through a history of artificial intelligence. What does it mean to have artificial intelligence? And then, how does generative AI fit into the broader class of AI?

Wu: Oh, thank you. That’s a great question. AI as a field has existed for decades, since the 1950s and ‘60s; that’s when people coined the term “artificial intelligence.” And if you were around in the ‘70s and ‘80s, you probably know about “expert systems,” which were a bunch of rules, if/then clauses: “If you see A, then B happens.” The idea was that you could code up lots of medical knowledge, or other types of knowledge, that way. And that didn’t work so well.
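To make the “if/then” idea concrete, here is a minimal sketch of an expert-system-style rule base applied by simple forward chaining. It is not from the interview; the rules and facts are invented purely for illustration.

```python
# Minimal expert-system sketch: hand-coded if/then rules applied by forward
# chaining. The medical-style rules and facts here are invented examples.
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "high_risk_patient"}, "recommend_doctor_visit"),
]
facts = {"fever", "cough", "high_risk_patient"}

changed = True
while changed:  # keep firing rules until no new conclusion is added
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes "possible_flu" and "recommend_doctor_visit"
```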

And then came neural networks, which also emerged around the ‘70s and ‘80s. At the time, they weren’t performing very well, because, as you know, neural networks need a lot of data. Then we come to the ‘90s, and we actually had an AI winter. Basically, neural nets weren’t working, and a lot of other AI techniques weren’t working either. So people thought AI was not going to happen for a long time.

And then we had the internet explosion. We have all the digital traces of our internet activity: our searches, our social media, our videos, our photos. And that data explosion ultimately fueled the current AI revolution, because now neural networks have a ton of data, and you can build very, very big neural networks, which we call deep learning. Deep learning drove the AI revolution from the 2010s to about 2018. And then a special type of AI technique called transformers came along, which basically makes neural networks run really fast. They’re computationally very efficient and save a ton of resources. And that allowed generative AI.

So although I’m really simplifying things, that’s the gist of it. Generative AI is really just the tip at the very end. And although it’s transformative (I absolutely agree with you; ChatGPT and DALL-E are amazing), it’s really just an extension of existing technology in neural networks and deep learning.

Bradlow: Yeah. It’s just, in this case, applied to predicting language; these are large language models. But as we’re going to talk about today, there are applications of AI in many, many different areas. And I think it’s also valuable for our listeners to know this: a lot of us trained back in the old days as statisticians, and all of us are familiar with regression, which is obviously a form of linear prediction model. Neural nets and deep learning just allow for much more complicated interactions between variables. Maybe you could comment briefly on this. That’s why simple rules tend not to work well: the way real life works isn’t just simple “if/thens.” It’s not like a linear regression, where more is necessarily better or worse. There are these complicated relationships.

Wu: That’s exactly right. Often, we cannot describe a relationship using a simple representation or a linear model. What a neural network does is allow a huge variety of functional forms, including ones we cannot write down explicitly at all. The machine learns how to transform input to output through many layers of transformations.
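As a minimal sketch of that point (not from the interview; the data and model sizes are invented for illustration), a linear model and a small neural network can be fit to the same relationship that has an interaction and a nonlinearity:

```python
# Illustrative sketch: a linear model vs. a small neural network fit to the
# same nonlinear relationship with an interaction between variables.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
# "More" of either input is not simply better or worse: the effect of one
# variable depends on the other, and it bends.
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.standard_normal(500)

linear = LinearRegression().fit(X, y)
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)

print("linear R^2:", round(linear.score(X, y), 2))      # typically near zero
print("neural net R^2:", round(mlp.score(X, y), 2))     # typically much higher
```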

Bradlow: Well, let’s talk about the main topic. And first of all, thanks, Lynn, for bringing that up, because a big part of this series is that a lot of people are talking about generative AI, and you’ve just pointed out that it is a special case and a specific application area. So let’s talk about this. As a matter of fact, one of the things I present to my MBA students, and interestingly something you wrote down in advance as a question, is that previous studies have indicated that between 40 and 70 percent of jobs could be automated. Let’s start with: which jobs do you think could be automated? And two, what would prevent that from happening? Or is there no way back at this point?

Wu: We’re talking about generative AI specifically?

Bradlow: It could be. Or it could be, for example, now automation and artificial intelligence is going to allow robots to do lots of physical tasks that we couldn’t do before. So it could be vision-based types of jobs and opportunities. It could be robots doing surgery. It could be robots replacing people in plants. It could be in lots of different ways.

Wu: Okay, great. So let me be more concrete and talk about robotics, and how physical robots can change labor composition, right? Contrary to the popular notion that 40 to 60 percent of employees are getting laid off and left without jobs, in our longitudinal research over about 20 years using Canadian data, where we capture every single robot being used in a firm or an establishment, look at which employees are working with the robots and who is still there and who is not, and track revenues and firm practices in terms of HR management and how people are rewarded, what we found is, interestingly, that when firms adopt robots, they actually hire more people. They do not lay off more people.

Bradlow: Very interesting. Why is that?

Wu: It actually turns out that it’s the robot adopters who become much more productive and much more efficient. They grow the pie bigger and hire more people. It’s not the robots displacing people directly; it’s the robot non-adopters who are no longer competitive, and they are laying off people.

Bradlow: Okay. As a statistician, this is an interesting case of what I’ll call a “self-selection” problem. The fact that a firm has adopted robotics is indicative of its likely growth pattern, and faster-growing firms tend to hire more people. From a selection bias standpoint, is that the argument being made?

Wu: That’s a very good point. We actually looked at the type of people that are being laid off, right?

Bradlow: That was going to be my next question.

Wu: Right. I could talk about the statistics and the methods we used, but I think it’s easier and more intuitive to understand it this way.

Bradlow: But let me just understand. If I build a robot, or utilize a robot, to do a specific type of task, are you suggesting that the firm hires more people to do that type of task? Or does the firm hire more people in general to do other tasks, because now this task is done more efficiently, the firm can produce more units, and it needs more people in other jobs?

Wu: It’s really the latter. Right.

Bradlow: The latter. Okay.

Wu: But no robot is doing everything. Robots are not yet doing everything a human can do. So robots replace part of a task a human worker used to do, but they increase the demand for the other tasks that person used to do. So if you are a manager, and you mostly handle quality assurance issues and the robot takes care of that, you can do more leadership. You can do more people management. You do other work that robots can’t do yet.

Bradlow: I love it when people find things that are counter-intuitive. Your paper finds that the employment story is reversed for managers; that is, more robots can equal fewer bosses. Tell us about that, because that must have been one of the most intriguing findings, to me. I think most people would say, “No, it’s the lower-level laborer that’s going to lose their job. But managers? Someone has to make decisions. You’re fine.”

Wu: Yeah. That’s also a very intriguing finding. We thought we got it wrong at first, so we really double-checked, triple-checked, quadruple-checked. And then we found out why. It’s two effects. One is a direct effect. I mentioned that monitoring technology can figure out what your employees are doing, right? Think about a laptop: you can capture every key the employee types, if you want to. And in a warehouse, every box coming in is clocked. The firm knows exactly what you’ve done, so you don’t need a foreman or a person watching you actually doing the work, because it’s all automatically captured, and your performance reports are automatically generated. So that part of the managerial task has been directly substituted.
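As a hypothetical illustration of that “automatically captured, automatically reported” point (the field names and numbers below are invented), logged scan events can be rolled up into the kind of per-worker summary a supervisor used to compile by watching the floor:

```python
# Hypothetical sketch: rolling up automatically captured warehouse scan events
# into a per-worker performance summary. Fields and values are invented.
from collections import defaultdict

scan_events = [
    {"worker": "W1", "station": "pack", "seconds": 42},
    {"worker": "W2", "station": "pack", "seconds": 55},
    {"worker": "W1", "station": "pack", "seconds": 39},
]

totals = defaultdict(lambda: {"boxes": 0, "seconds": 0})
for event in scan_events:
    totals[event["worker"]]["boxes"] += 1
    totals[event["worker"]]["seconds"] += event["seconds"]

for worker, t in sorted(totals.items()):
    avg = t["seconds"] / t["boxes"]
    print(f"{worker}: {t['boxes']} boxes, {avg:.1f} seconds per box")
```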

So in the very beginning, within zero to one year after adoption, you see a slight dip in managerial work. But after two or three years, you see a very sharp downturn. Why is that? That’s no longer a direct substitution story. What happened is that the composition of labor, the employees at the firm, changed. Think about what robots can do. It turns out robots are really good at the middle-skilled work. They’re not very good at thought leadership, at thinking about what to do. That’s what humans do. We need to figure out our goals, figure out what to do, write a plan, and execute it.

But robots are still not very good at some of what our hands can do: the residual tasks. Somebody still needs to pick up a ball, in various shapes and forms, and put it in a box. Our hands can handle infinite shapes, whereas a robot has to be trained for a long time to handle one type of shape. So the versatility of robots is not there yet, although people are making a lot of advances in this area. You can train a robot on a very narrow task, but it cannot generalize very quickly to lots of tasks.

Bradlow: I see. And do you think the explosion of transformer models, the ability to handle richer and larger data sets in real time, is going to threaten even that? That robots are going to get better at, let’s call them, fine motor tasks and complex tasks? And that it’s just a matter of time before “the pie” expands to even more fine motor types of jobs?

Wu: People are definitely working on that. But I do want to caution you that in robotics, the data generation is just not where it is in AI. In AI, you can simulate lots of different kinds of data, right? And your pictures, your videos are automatically there on the internet; you can just grab them. But if you want to train a robot, you’ve got to train it in realistic settings, where humans are involved.

Bradlow: So why can’t you have, and you might call it a form of “supervised learning,” a bunch of humans doing a task? I have extraordinary video cameras; maybe the person has sensors all over their body. And now, all of a sudden, I’ve got this extraordinarily rich data set of movements and actions, and I’m going to use AI to train a model so robots can take over those tasks. Is it that companies are not doing these kinds of, I’ll call them, “measurement experiments” at large scale? Or why can’t that be done?

Wu: I think you can do that, but again, only for a very narrow set of tasks, and it only makes sense if that task has extremely high value to the firm. Right? Also, with most robots, you can’t just plug a robot into a firm and expect it to work autonomously on its own. There is always a human monitoring or working with the robot. That’s what cobots are, and a lot of the new development using more advanced techniques is in cobots.

And with these cobots, it’s usually human-machine interaction. In those scenarios, you actually have to generate human-robot interaction data, and that’s something you cannot just simulate. You actually have to capture it. Right? You can put sensors all over me, or all over the robots, but you have to catch me everywhere, and that’s just hard to capture. It’s not like you can just put a car on the road and capture all of this; when a human’s involved, it’s expensive. Capturing that data is really expensive. So in terms of data, I always say robotics is at least ten years behind AI in generating the techniques that can do what generative AI can do.
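The approach Eric sketches in his question, recording humans performing a task with sensors and then training a model to map sensor readings to actions, is essentially behavior cloning. A minimal sketch under those assumptions (the sensor data, dimensions, and “expert” behavior below are all invented stand-ins) might look like this:

```python
# Minimal behavior-cloning sketch (illustrative only): learn a mapping from
# sensor readings ("states") to the actions a human demonstrator took, then
# reuse it on a new reading. All data here is synthetic stand-in data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend each demonstration frame is 6 sensor readings plus a 2-D action
# (say, a gripper velocity). In reality this would come from motion capture.
states = rng.normal(size=(1000, 6))
actions = np.column_stack([
    np.tanh(states[:, 0] - states[:, 1]),   # invented "expert" behavior
    0.5 * states[:, 2] * states[:, 3],
])

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                      random_state=0).fit(states, actions)

new_reading = rng.normal(size=(1, 6))
print("predicted action:", policy.predict(new_reading))
# The catch Wu points out: this only covers situations the demonstrations
# covered, and collecting human (or human-robot) demonstrations is expensive.
```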

Bradlow: I see. How should firms think about their employees now, or how should employees think about the skill sets they need to develop to continue to thrive in a world where, I think we agree, robots are unlikely to be doing less over time, and probably more? What can employees do, and how should firms think about managing them?

Wu: I think one of the biggest things humans tend to underestimate is that we have a lot of tacit knowledge about how work gets done. Right? Robots can only observe a small part of it. A lot of our knowledge is not codified in a form a machine can consume. So the more of your knowledge that is not codified, the better off you are, because there’s no way robots can use that knowledge to replace you.

But then again, that’s very abstract, right? If you have been in a firm and an industry for a long time, and you have deep expertise about that industry or that firm, that’s always going to be valuable, because that knowledge cannot easily be transferred outside.

Bradlow: I don’t know; I’m sure someone has studied this. Are employees strategic? It’s the classic: “I teach my student, but I don’t teach them everything I know. I teach my student strategically enough, because if I give away everything, then in theory I could be replaced by a robot.” Are there any studies on the strategic behavior of employees?

Wu: I am sure it has been done, and I think more will have to be done. Because a lot of firms these days are using humans to train robots. I am sure those kinds of strategic behaviors will become more and more prevalent. There is some literature back in the ‘60s and ‘70s, but I think that needs to be updated.

Bradlow: I see. So, do firms have a choice? Is there any choice firms really have? I do a lot of work in sports. Do I really need some former 50- or 60-year-old baseball player throwing batting practice? Why don’t I just get a robot to do that? I study things in high tech. Do I really need human supervision anymore, when I can just have a robot do it? Can firms really compete at scale nowadays without adopting robotics? Or another way to frame it: is there any industry you think is essentially immune to this?

Wu: Based on my study of the Canadian data, I don’t think any industry is immune, because adopters are really just killing the competition. Unless your industry collectively decides not to adopt robots, but then you have to worry about new entrants coming in with robots. So I think you have to be forward-looking, at least, and think about how robots are going to affect your firm and your productivity.

But the most important thing is not, “Oh my God, my competitor adopted robots. I need to do it immediately.” I would caution against that. The value of adopting robots is really in helping you understand your existing firm processes: which parts can be automated, and which parts can be strengthened to complement that automation. In fact, I would say 90 percent of the value comes from studying that process, understanding where it can be automated, where it cannot be, and how to strengthen the collaboration between human and machine.

So this is not “adopt robots, buy some expensive hardware, get some consultants, and you’re done.” It’s going to require people in your firm who have been doing that process for a long time, who can tell you, “You know, this part of the process makes sense. Sure, the robots can do it. But I don’t think it’s a good idea, because this is a very risky operation. If something happens to the person, it’s very dangerous, a high-risk situation that should not be automated, even if it can be.” So it’s really about understanding that process and deciding, with a human making the call, where it can be automated and where it shouldn’t be, even if it can. That is where the value comes from.

Bradlow: Let me ask a two-part question, but it’s really the same question. How big an effect are we talking about here? A firm adopts a robot, and then, by some measure — and this is the second part, but it’s really the same question. First, in our language as academics, what are the dependent variables people care about? Imagine you have a model that says adopting robots to a certain degree to do a certain task changes some outcome I care about: let’s say sales, or market share, or employee retention, or employee satisfaction. What outcome variables do people tend to study when they think about the adoption of robots? That’s number one. And second, how big an effect size are we talking about? Of all the ways a firm could improve efficiency and profitability, is this in the top five? How big an effect are we talking about?

Wu: That’s a great question. Obviously, as economists and statisticians, we capture every variable we have in our arsenal. Right? Revenues. Employee satisfaction. Individual productivity and performance. And that’s the great thing about the Canadian data set: we actually have a lot of that data. We look at employee satisfaction and their incentive pay systems. So we were able to capture a lot of this, and we were able to study what the effect size is.

Bradlow: And what did you find? How big an effect are we talking about? The way I always tend to think about it, there are three populations. Let’s say we’re thinking about profits, or surplus: what happens to the firm, what happens to customers, and what happens to society?

Wu: Let me start with revenue and profit, right? Think about a profit-maximizing firm. If I adopt one robot and get $10 back, and my robot only costs $1, that means I should just buy more robots, until I reach exactly ten robots and the value going in equals the value coming out: I put in $10 of robots, I get $10 of output. That should be the math of that firm, right?

In fact, what we found is that the return to a robot is about ten times its factor share. So ten times the value: you put in one, you get ten back. That’s what we found.

Bradlow: But there has to be some sort of diminishing marginal returns to robots. Like, the second robot’s got to be worth maybe a little bit less. Of course, my costs might go down, as well.

Wu: Yeah, exactly. There is that. But what I want to emphasize is that the robot itself is not responsible for the whole ten. Nine of it comes from process improvement, the stuff you do to make the robot work. The robot accounts for one, which makes sense; that’s its factor share. But the nine is what you do with it.
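A back-of-the-envelope version of that decomposition, using the round numbers from the conversation (these are illustrative figures from the interview, not the paper’s estimates):

```python
# Back-of-the-envelope decomposition of the "ten times" return, using the
# round numbers from the conversation (illustrative, not the paper's estimates).
robot_cost = 1.0        # spend $1 on the robot
total_return = 10.0     # observe ~$10 of value, ~10x the robot's factor share

robot_share = 1.0                             # the part the robot itself "earns back"
process_share = total_return - robot_share    # the rest comes from process improvement

print(f"return attributable to the robot itself: ${robot_share:.0f}")
print(f"return attributable to process improvement: ${process_share:.0f}")
print(f"multiple on robot spending: {total_return / robot_cost:.0f}x")
```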

Bradlow: I see. And what about for employees? You could imagine one of two things. One is that for the employees who remain, now that the firm’s making more money, you could argue they are in better shape than before. I don’t know — what do you find for employees?

Wu: That’s a great question. What we found is that when humans are working with robots, the robots make different kinds of errors than humans do. Right? If you and I work together, we could collude, in a sense: “Let’s shirk today. We’ll say our machine died, so let’s just have a coffee.” I’m not saying it’s going to happen, but we could. And a manager sitting far away, at headquarters, who cannot directly observe you, can’t figure it out. They say, “Oh, okay. I guess the machine wasn’t working. We’ve got to repair it.” Right?

But when you are working with a robot, you can’t do that anymore, because the robot is doing these things and those kinds of errors are less likely to occur. Because humans and robots make different errors, when you’re working with a robot I can tell your performance; I can capture it more accurately than when you’re working with another human.

So what happened is that firms adopting robots also changed their performance pay practices. They reward high performers more than before, because you can tell people are good performers and not just lucky, or benefiting from something else.

Bradlow: So good performers, in some ways, should be happier with robots.

Wu: Yes.

Bradlow: Because in some sense, there’s less measurement error in their performance.

Wu: Yes. That’s exactly what we found. There are fewer team-based promotions and less team-based incentive pay, and more individual-based performance pay. To what extent that will continue in the future, I don’t know, because we see a lot of problems with individual performance pay, too. But at least right now, the pendulum is swinging toward individually based incentive pay.

Bradlow: And how about policymakers? Should this be a regulated market? I think most people, on the surface, would say this is bad for society because it’s replacing paying jobs for humans, although, on the other hand, there could be greater efficiencies. How should policymakers and society think about it?

Wu: I think the genie’s out of the bottle. You can’t put it back anymore. You can’t just say, “Let’s not use AI,” because if the U.S. doesn’t use AI, other countries will. So we have to be competitive on a national front, as well.

In terms of regulation, I think it’s really important that we actually study the phenomenon in greater detail. Now we see generative AI and people are getting worried, but we really have to see the evidence. Does it actually make consumers worse off? Does it actually hurt people? Remember, my robot study was premised on the claim that robots would kill 40 to 50 percent of employment, and we found the opposite: employment actually went up quite a bit. So we need large-scale empirical evidence first, before we make any significant policy changes, because we don’t want to throttle this progress. We want AI to grow, rather than kill it completely without evidence that it’s actually causing significant harm. I’m not saying it won’t cause harm; it will. But we need to watch and monitor closely before we make any decisions.

Bradlow: Maybe in the last minute or so that we have, tell me: if we were sitting here ten years from now (and maybe we will be) and we look back on those ten years, what are we saying has been the big advance in AI and robotics?

Wu: Oh, I think generative AI is for sure the most prominent advance in NLP. It pretty much wiped out all the existing natural language processing techniques out there; it has just de facto replaced them, so that entire field needs to be rethought. In the next ten years, I think, the story might not necessarily be technological advancement in that area, per se. It’s really about the applications. How is it applied to marketing? How is it applied to schools and education? How is it applied to writing? How is it applied to reporting? As industries start to find novel uses of this technology, that’s where we’re going to see how amazing it is.

Think about the internet. The internet was invented in the 1970s, but we didn’t really see its potential until around 2000, with the dot-com era, with the web, with e-commerce. So I think generative AI may be faster, but it will still be a decade or two until we see the real impact.