Workers can stop worrying about being replaced by generative artificial intelligence.
Wharton experts Valery Yakubovich, Peter Cappelli, and Prasanna Tambe believe the displacement won’t be nearly as drastic as many predict. In an essay published in The Wall Street Journal, the professors contend that AI will most likely create more jobs for people because it needs intensive human oversight to produce usable results.
“The big claims about AI assume that if something is possible in theory, then it will happen in practice. That is a big leap,” they wrote. “Modern work is complex, and most jobs involve much more than the kind of things AI is good at — mainly summarizing text and generating output based on prompts.”
Yakubovich recently spoke to Wharton Business Daily, offering several key facts he hopes will allay people’s fears of robotic replacement. (Listen to the podcast.) First, while generative AI has advanced rapidly, it still has a long way to go before it can function autonomously and predictably, the two qualities that make a system reliable. Second, large language models (LLMs) like ChatGPT can process vast amounts of data, but they cannot parse it accurately and are prone to producing misleading information, a phenomenon known as AI hallucination.
“You get this output summary — how accurate is it? Who is going to adjudicate among alternative outputs on the same topic? Remember, it’s a black box,” said Yakubovich, who is executive director of the Mack Institute for Innovation Management.
Third, companies are risk-averse and need to maintain a high degree of efficiency and control to be successful. So, they won’t be rushing to lay off all their people in exchange for technology that still has a lot of bugs to work out.
“If we are thinking 40, 50 years ahead, that’s wide-open ended,” Yakubovich said. “The issue we are discussing now is the very specific [needs] for business. The risk for companies is very high, and they are not going to move very fast.”
The Imperfection of AI
Despite its shortcomings, generative AI has been touted for its ability to handle what many consider to be mundane communication at work — interacting with customers online, producing reports, and writing marketing copy such as press releases. But the professors point out that many of those tasks have already been taken from workers. For example, chatbots handle customer complaints, and client-facing employees are often given scripted language vetted by lawyers.
Yakubovich said most office interaction is informal communication, and a lot of useful organizational knowledge is tacit. While digital tools are increasingly capable of capturing both, nobody wants their emails, Slack chats, or Zoom transcripts freely parsed by an LLM, and the quality of extracted information is hard to verify.
“I haven’t seen any company yet that dared to feed their emails into the models, because you can learn a lot about the company from that. Who wants to give open access?” he said. “It’s very hard to control what the model will produce and for whom. That’s why the models are very hard to use within the organization.”
Companies also don’t want AI involved in politically sensitive matters, especially if there are legal concerns. “What I see so far in talking to senior leaders of companies is that they try to completely avoid using models in politically charged cases because they know they will have more work to do adjudicating among the different parties,” he said.
Data science has been around for years, Yakubovich said, yet many companies still lack good infrastructure to organize the tremendous amount of information the technology is capable of collecting. Even if they build it, humans will remain an indispensable part of making sense of it all.
“If you want to curate everything, it’s a lot of work, and this is where more jobs will emerge,” he said.