Wharton’s Daniel Rock analyzes how AI could increase productivity in the workplace and what’s standing in the way of widespread implementation. This episode is part of a series on “Innovation” that was produced in cooperation with the Mack Institute for Innovation Management.

Transcript

The Role of AI in Shaping Productivity

Dan Loney: When we think about innovation these days, there’s a good chance that artificial intelligence is going to come into the conversation. Even though it’s been around for some time, it only now feels like the majority of the public at large is seeing the impact of artificial intelligence on our lives. Daniel Rock is an assistant professor of operations, information and decisions here at the Wharton School. He and his colleagues have looked at how AI can impact something like productivity. Dan, great to have you here today. Thanks for your time.

Daniel Rock: Great to be here. Thanks for having me.

Loney: When I bring up AI, doesn’t it seem like productivity is a natural first thing for people to think about?

Rock: Absolutely. When we talk about productivity, it’s important to define terms here. For economists, productivity can be a few different things — it can be how much output per worker you have, how much revenue per unit of input. But generally, all of it points to one big idea: how much output do we get per unit of input? It’s not just about how we cut jobs or reduce resource use. It’s also about how we create more. And with these tools empowering people to do greater and more interesting things, productivity in the long run has to be positively impacted by what we can do with them.

Loney: You say, though, there’s a paradox when you think of this.

Rock: Yeah, sure. I think the core thing here is a sufficiently transformative technology, what economists would call a general-purpose technology. That is, it’s pervasive, it improves over time, and then it kind of necessitates and spawns complementary innovation — that is, the other stuff you need to build to get this stuff to go.

So, yeah, there’s a lag. It takes a long time to build up those additional assets, to reconfigure your organization, to train people to use stuff. Over time, that’s going to pay off in a big way, and we’re seeing people make huge investments in that. But it’s not going to be super powerful right off the bat. Actually, with AI, there are some applications that are — but the long-run implications are going to take a while to play out, I think.

Mismeasurement and Misalignment: Understanding the Challenges of AI

Loney: You have four areas of potential impact that you’ve come up with in the work that you’ve done, the first being false hopes. Explain that a little bit?

Rock: Oh, yeah. This is the explanation for the paradox of why it takes a while. I’ve already preempted what I think is going on, but yes, there’s the chance — this is sort of a Bob Gordon view, [though] I don’t want to put too many words in his mouth — that AI just isn’t that big a deal. You could broaden this to say any technology just isn’t that big a deal. We see lots of promise and hype, but it’s just never going to materialize. That’s a consistent way to view the world in the early stages if you don’t know what’s going to happen. But then, you do have to change tack if you see that the benefits start to show up. I think with AI, we’re starting to see that a bit, so that’s number one.

The second one is mismeasurement. There’s some folks in Silicon Valley who say this, and there’s some evidence that this might be going on, too. The idea here is that the gains are real — they’re happening — but we’re not capturing them properly in the economic statistics. And I think the folks at the BLS (Bureau of Labor Statistics) and the BEA (Bureau of Economic Analysis) do a really great job of trying to measure the economy. Where it might be tougher to measure things is [with services like] Google being free. I asked my MBAs, “Would you rather have search, or indoor plumbing?” After trying to wriggle out of that conundrum, many of them still pick search over indoor plumbing. I’m kind of with them on that one.

Loney: That’s probably a good idea.

Rock: It gets cold in Philly, but not too cold.

Loney: Exactly.

Rock: So that’s the second one. We could be mismeasuring things. And yes, there are some cases where that may be true. But in general, you have to make an argument for why it’s different now. What changed to make us worse at measurement, given what the economy is producing? And I think that’s a tougher case. My co-author, Chad Syverson, at the University of Chicago, has kind of disposed of that argument — at least up until 2017 or so. So that’s the second one.

The third one is sort of a rent dissipation argument. What does that mean? It’s that the gains are real, but they’re accruing to a really small proportion of people in the economy. They’re taking all of the gains, and nobody else is seeing anything there. I think you could make an argument that a lot of that is still happening, but it would have to be really enormous to take away the gains from the technology, given the expectations.

And then the last one, which we just discussed, restructuring and implementation lags. That stuff can take a while. We see a lot of promise, but let’s not confuse a clear view for a short walk. It’s going to take a long time to implement this stuff.

Loney: Is the expectation, [given] where we are currently, that we’re still going to see innovation coming from other areas to complement what a lot of people believe is the core of AI right now?

Rock: Yeah, absolutely. I think that’s already happening in a lot of areas. One of the really cool things about AI is— I don’t know if you code or use coding assistants, but the fact that these tools [and] the software you need to augment the AI tools can be partially written with AI help. So we really get in this nice flywheel, where you use AI to improve the sorts of tooling you need to make AI more effective. When I talk with companies, I’m like, “Are you guys doing this? Because you should be. It’s very helpful.”

Loney: There’s an element that you talk about in this paper regarding the added costs being a type of capital that can be just part of the build-out, correct?

Rock: Yeah, absolutely. When you have these adjustment costs or fixed costs of investment, these are things like the training or intangible capital, or even the culture around how you build an organization that works with machine learning or AI software. This is a different type of software. It creates output that’s non-deterministic or maybe a little bit fuzzier. It’s not perfect every time. It’s not cookie-cutter. That’s a mindset shift, too. You can’t expect the same results as you could with an ordinary kind of rules-based software.

You have to pay that upfront cost, or maybe even ongoing cost, to keep people in the loop [and] to structure your organization processes properly. Then what you get out of that is a competitive advantage, basically. You can do things other companies can’t, if you crack that.

Loney: And obviously, part of that probably feeds into […] the value that those companies have. That’s a significant beneficial capital component they have to their companies, which is larger than other companies.

Rock: That’s a really fascinating point. I mean, you think about the value of OpenAI, Anthropic, or even Microsoft, Google — you know, whoever’s building [the technology]. Llama. A lot of the value of those companies is in the complementary investments that their customers, or the ecosystem at large, are making. That’s really interesting, right? They get more valuable as their customers and consumers learn how to integrate that toolkit.

I think that’s something that will take a little while, but they’re well aware of it, too. They’re trying to make it easier to use. In some sense, ChatGPT is more of a UX innovation. The playground existed — you could use stuff before — but ChatGPT just showed people, “Hey, you can really engage with these models and do something cool.” I massively updated how important I thought UX was after I saw the success of that app.

Is AI Redefining the Future of Work?

Loney: You mentioned some of the metrics that will come into play. It feels like we’re still at a point where the development of some of those metrics either hasn’t happened or is ongoing right now. So, it may be hard to truly gauge the value or the component of productivity, especially when we don’t have the dynamics fully tweaked to what we need, right?

Rock: Yes, I agree with that. Though, what makes it a little bit easier to do a good job with this wave of software [AI] is that we’re using the last wave of IT to instrument it, right? So we have software to track the software. Before, the best you could do — the best you could hope for — was surveys of some kind. Say, in the early ’90s. Now, we can scale up those efforts. You can track what some of my colleagues called “digital exhaust.” You connect to APIs for companies [and] you can see how they’re changing what they’re doing. You know, there are little pockets. This is like this big iceberg, and we’re seeing just the tip of it. But you can use those points, where we can see the tip of the iceberg and how it’s changing, as a way to gauge what’s going on.

And the concrete example is something I’ve done in my own work. I’ve tracked how many people with AI skills are being hired, company by company, assuming that if you’re hiring people to do this, you’re probably building out all the other complements to make them effective. And we’re trying to measure the size of the investments on that front.

Loney: For many companies, the way you need to do it is to bring on the talent before you actually do the level of implementation to get to that point, so you have people who understand it, going into the process.

Rock: One hundred percent. You can’t get away from labor markets and those complementary investments. If you do AI well, then you probably did data science well before. If you did data science well, you probably did the cloud well before. There’s a whole stacking of these technologies. It actually makes AI super-concentrated in only a few firms right now. And it’s a little bit of an explanation where that might be coming from.

But yeah. You’re bottlenecked in three potential areas. It’s either talent, data, or [computing] right now. But you mix those three things together in the right proportions, and you start to get AI internal capabilities.

Loney: What do you think, then, doing this research helped you to better understand about where we are going, and that connection between AI and productivity?

Rock: I started to think, what are the ingredients inside of a company that would generate productivity? Where does it come from? And there are a few models that colleagues in other places have put together that I can use as workhorse models. There’s the task-based approach. There are researchers, like David Autor and Daron Acemoglu at MIT, who have used this heavily. The idea is, let’s break down a job into a bundle of tasks and track how all those tasks, or those bundles, are changing. I think the more change you see at that level, the more of an indication you have that something is different now. So that’s one early check you can do to see what’s going on. I’m working on some of that, looking at job postings with a few colleagues.

And then there’s a perspective that Tim Bresnahan at Stanford, as well as Joshua Gans, Avi Goldfarb, and Ajay Agrawal at the University of Toronto, have put forth. It’s a sense that nobody’s ever lost their job to task-based automation. It’s not happening task by task, necessarily, or change isn’t happening task by task. It’s happening at the system level. So, when we change the direction of what’s possible with a model — like I can discover new drugs with these tools, or I can […] make pretty lousy images or paintings that I couldn’t do before with my new capabilities. As you give people these new capabilities, you redesign the system, and that new system’s got different demands for people, different demands for capital.

So let’s see if there are companies making big changes, saying, “We’re going to reconfigure this module.” It’s kind of hard to do that if things are moving really quickly. You don’t have certainty. You don’t feel like you’re standing on solid ground when you do that.

Loney: Does it feel like, with where we are with AI and the workforce right now, that obviously AI is going to play a significant role, but the human component will be there? And to a degree, maybe the human component even becomes more of a learning experience as we move forward, because of how AI is guiding the ship a little bit here?

Rock: Yeah. I get accused of being an optimist on this point, so I strongly agree with that. But I do think there are, of course, going to be pockets where things go better or worse. One thing I’ve grown fond of saying recently is that we can’t get away from labor markets. I will say, perhaps it’s a little bit of a stretch, I don’t want to get too far out over my skis here, but the augmentation versus automation debate matters at some unit of analysis. But at the individual worker or manager level, making a decision about where to deploy the technology, you can augment someone so they can do the job of 20 people. And if the company doesn’t want 20 people to do it, then that’s not great news — for the workers, that is. On the other hand, you can automate things that people hate doing and refocus their work onto other stuff where demand expands.

These are choices that companies, managers, and workers can make. They’re not foregone conclusions. And I think I have confidence in the talent of people out there in the world to make good choices, and ultimately end up in a more fulfilling work configuration scenario.