After decades of slow development, the growth of artificial intelligence is now in hyperdrive. Wharton’s Kartik Hosanagar explains how AI is changing human history, whether we’re ready or not.
Dan Loney: What interested you in studying and working on artificial intelligence?
Kartik Hosanagar: My undergraduate degree was in electronics, and my master’s was in computer science, so I studied computer programming. But this was back in the ’90s, at a time when AI wasn’t what it is today. In fact, by then AI had mostly failed to deliver on its early promise, so interest in it was diminishing. We had very limited coursework in AI.
That’s the context in which I was first introduced to AI. But when I was in grad school doing my PhD at Carnegie Mellon, I took a course from one of the geniuses of modern times. His name is Herb Simon. He is probably the only person I know who has won the highest award in economics, the Nobel Prize, the highest award in computer science, the Turing Award, and the highest award in psychology. You just tried to register for the course if you could. And I did, without having any interest in the subject.
I remember sitting in that class as he talked about these ideas, which were a mix of psychology and how the human mind works, computer science, and economics, and asked how we could take those ideas into the world of AI and computers. That was the first time my interest in this was piqued. Nonetheless, my own work still wasn’t in AI yet. For the next few years, I was working on e-commerce, internet advertising, and so on.
My first genuine interest in this topic came when I started to see personalized recommendations on Amazon and Netflix and all these places. A student of mine raised the question of what this is doing to the kinds of products and media we consume. How is it changing them? I got really interested in this idea that algorithms are influencing the decisions we make. That was my first entry into the subject of algorithms broadly, but then within that, AI as well.
Is AI a General-Purpose Technology?
Loney: There is so much conversation going on right now about AI and how it’s going to impact business. AI and business are not new to each other; they’ve been connected for some time, but it feels like the conversation has reached a different level. How do you view that combination and how those two will work together in the future?
Hosanagar: Of course, AI is the big buzzword in business. I often find that the business world gets divided and a bit polarized, in the sense that there are the believers who say, “AI is going to be a game-changer.” And then there are people who feel like, “This is the next NFT or the next wearable computer or Google Glass or whatever.” Pick your example of a technology with a lot of hype that goes nowhere.
I’m going to make a big, bold claim here, which is that I think AI is going to be like electricity or the steam engine or computers, meaning the kind of technology that changes the world forever, that changes humanity forever. There were human lives before electricity, and there are human lives after electricity. It’s going to be like that with AI. And this is not just a statement I’m making based on gut feel (though, by the way, there is some gut feel in it); it’s based on real evidence.
Economists and other researchers have studied these kinds of technologies, which we refer to as general-purpose technologies. These are technologies, like electricity and computers, that differ from other technologies in a few ways. One is that at a macro level, they stimulate a lot of innovation and a huge amount of economic growth. At a micro level, meaning individual firms, they end up changing the winners and losers in individual markets because of how companies adopt the technology.
Take the internet, for example. The largest retailer isn’t Walmart; it’s Amazon. Kmart, one of the largest retailers before the internet, doesn’t exist today. It changes competitive dynamics fundamentally. And researchers have looked at the properties of technologies that go on to become general-purpose technologies. All the early data, such as hiring patterns related to AI and patent filings related to AI, among a number of other things, suggests that AI looks like a general-purpose technology. In fact, there was a recent study by my colleague, Dan Rock, who looked specifically at large language models like ChatGPT, and his study finds that even those models have some of the properties of general-purpose technologies.
You started by asking what the connection to business is, and I think my answer is that it is going to be fundamentally transformative for business.
Loney: Then you’re talking about a pivot moment. We’ve used the word “pivot” a lot over the last three or four years because of the pandemic and how businesses had to pivot to survive. This is a pivot, but on a much larger scale, in terms of where we are going as a society.
Hosanagar: Absolutely. Imagine the pandemic without the internet. We were able to continue to work because of Zoom and other tools. The internet is really a general-purpose technology that has changed our lives, and it has had a huge impact over the last 20 years, and certainly the last two or three. AI will be similar.
We’re just starting to see early things like ChatGPT, but this is just the start. It’s going to change everything. Companies that don’t wake up to that reality, that want to follow rather than lead, that say, “This could be just the next buzzword; we will play it safe,” or that see an early failure, backtrack, and conclude there’s no ROI in this (like the companies that did so when the dot-com bust happened) will pay a big price. I think it’s the companies that truly embrace its potential and play the long game that are going to be the big winners from this trend.
Is AI Progressing Too Fast?
Loney: What do you say about some of the recent calls to slow down the development and maybe take more time and really think this out?
Hosanagar: The concerns are legitimate. It is moving very fast. This is a technology that is unlike other technologies we’ve seen in terms of its rate of change and progress. Given its implications for everything from employment and employability all the way to the use of AI in warfare, or AI going out of control, there is a range of concerns here. I think the concerns are real.
What is the right solution to those? I’m not yet sold on whether a six-month pause in AI work is going to change anything. First of all, I don’t even think it’s feasible. But let’s say it’s feasible and you’re able to stop people from working on these kinds of AI models. What’s going to happen in six months? Nothing. Because it’s not as if you’ll find the magic solution. What needs to happen is investment in education at the school level, where people are trained to understand AI, to understand things like deepfakes, to understand issues around ethics when building technology. This is not something you solve in six months. This is something you solve over 10 years by changing curricula. You need to retrain engineers. You need to retrain managers. You also need to retrain your congressmen and senators and all of the politicians and lawmakers. What are you going to change in six months? Nothing.
Loney: How are companies going to be better than their competition if they’re all using the same type of product, like ChatGPT?
Hosanagar: I think a lot of companies that use off-the-shelf tools like ChatGPT will create amazing efficiencies that will be copied by a lot of their competitors, which will bring costs down. All of the value will accrue to the eventual customers and users because it will bring prices down.
The second thing that is going to happen is, because they bring prices down, it will help expand markets by bringing in new customers. And expansion of markets will mean there’s value created for all of those companies equally, meaning all of them gain something. The companies that actually use tools like this to get a real advantage over their competitors are going to be those that can pair off-the-shelf AI tools and capabilities with something proprietary. Identifying the proprietary, complementary asset they can bring to the table is going to be the name of the game for companies that are investing aggressively.
I’ll give you a couple of examples of the proprietary things they can bring in. You could use an off-the-shelf large language model like GPT-4, which is basically the underlying model for ChatGPT. But if you’ve got a large proprietary data set of, say, health care information, you can fine-tune or retrain those models on your massive health care data set, and now you’ve created a new AI that is best in class at answering health care questions. And you were able to do that because you had the largest proprietary health care data set.
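To make the health care example a little more concrete, here is a minimal, hypothetical sketch in Python of one common first step in that process: converting a proprietary question-and-answer data set into the chat-style JSONL format that fine-tuning pipelines for large language models typically accept. The records, the system prompt, and the `to_finetune_jsonl` helper are all invented for illustration; they are not a real medical data set or any particular vendor’s API.

```python
import json

def to_finetune_jsonl(records):
    """Convert (question, answer) pairs into chat-format JSONL lines,
    one training example per line."""
    lines = []
    for question, answer in records:
        example = {
            "messages": [
                # A fixed system prompt framing the assistant's role.
                {"role": "system", "content": "You are a health care assistant."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        lines.append(json.dumps(example))
    return "\n".join(lines)

# Hypothetical proprietary records, invented for illustration only.
records = [
    ("What does HDL stand for?", "High-density lipoprotein."),
    ("What is a typical resting heart rate?", "Roughly 60 to 100 beats per minute."),
]
jsonl = to_finetune_jsonl(records)
```

The resulting JSONL file would then be handed to whatever fine-tuning service or training script the company uses; the competitive advantage comes from the proprietary records, not from this formatting step.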
Somebody can take off-the-shelf AI, but integrate it into a great user experience. That creates a winning combination. For example, there are companies that are trying to build image-editing AI, where you are taking an image and you want some things to be changed. You don’t want to go to Photoshop and do it. You just want to give an instruction to AI, and it does it for you. Great. There are lots of startups doing that.
Many will use the same kinds of AI. They’ll look very similar, but if Apple does it and integrates it into the iPhone, it can give a seamless experience to the user because you don’t have to download an app. You can take a photo right there and make edits. If Google does it and integrates it into Android, that again gives a seamless experience on the phone. That gives them a leg up over anyone else using the same kind of AI. It’s all about pairing it with something proprietary that is also complementary.
Will Generative AI Make Our Lives Easier — or Harder?
Loney: How does AI factor into the discussions on legal and IP issues?
Hosanagar: Yes, there are tons of legal issues around AI. I’ll mention a couple of them. One is what happens when you use AI to make decisions in socially consequential settings, and you do it at large scale. For example, there have been concerns about using AI in courtrooms to predict the likelihood that a defendant will re-offend. Or what happens if you use AI to do resume screening? Or loan approvals? If it turns out these AI systems have biases, then a company that uses them in these very important settings exposes itself to litigation. So, that’s one type of issue.
My view on this is: Yes, you can complain all day about potential biases in AI, but before we do that, let’s talk about the alternative. The alternative is fundamentally flawed human decision-makers who have their own biases. It’s not as if decisions in courtrooms today or in hiring today are unbiased and we’re switching to AI that’s more biased. The reality is that AI biases are probably easier to detect than human biases, and probably easier to correct as well. Companies will have to make sure they are taking sufficient safeguards and auditing their AI sufficiently before they release it in these socially consequential settings.
That’s one set of concerns. The other is related to generative AI. We already saw this play out when a song was released that was supposed to be by Drake. It did really well, took off, and then it turned out somebody had created it with AI. So, I think the legal issues there are going to be on both the inward side and the outward side of generative AI.
By inward side, I mean what kind of data is used to train the AI. If you are using data to train AI to create music, where did that training data set come from? Do you have the consent of the musicians who created the music before you trained your system on it? Are you giving them suitable compensation if money is made from the resulting product? How are you tracking each musician’s contribution? You create a brand-new song. How do you say 2% of the song is inspired by Jay-Z, 3% by Drake, and 4% by somebody else? How do you even determine that? That’s the inward side.
On the outward side, if you create a new track in Drake’s voice or in Elvis’ voice, is that allowed? Do you need permission? There are all these kinds of questions. And U.S. copyright law doesn’t even cover synthetic media, so what is the copyright law around content created by AI?
Lots of issues will have to be tackled in the coming years, and those are the kinds of things where I think lawmakers and lawyers in general will be slow rather than proactive, and people will typically just resort to lawsuits. I think we’ll see a lot of lawsuits in the next two to three years.
Loney: What’s the potential impact around AI and the labor market?
Hosanagar: Anyone who is concerned about AI’s potential impact on the labor market is asking the right sorts of questions, in the sense that human history has often seen labor concerns associated with new technologies. Labor hasn’t always been in favor of technology. But it has almost always been the case that new technologies have created more jobs than they have destroyed, so those concerns turned out to be misplaced in the past. The real question is: Is AI like every other technology in the past, where it will eventually create more jobs than it destroys? Or is it ultimately going to be a net job cannibalizer?
I think that’s the big question. We don’t know the answer. If you were to put a gun to my head and say, “Give me an answer right now, Kartik,” I would say, “Well, I think it’s probably going to produce a net job loss rather than net job creation.” But that is not the full answer. Because AI will also augment jobs, not merely replace them. In a lot of jobs there are routine things that we do repetitively, that we don’t enjoy, that are soul-sucking in some ways. We will be able to outsource those to AI, and we’ll free up time for the more interesting, more creative pieces of the job, which I think will be great for all of us.
I also want to distinguish between high-skill and low-skill jobs. The kind of AI that’s been around for the last 10 years is predictive AI. This is AI that makes predictions, and you plug it into different tasks: predicting whether a credit card transaction is fraudulent, whether an email is spam, things like that. That’s one type, and then there’s generative AI, like ChatGPT, which creates text, images, and so on.
I want to talk about how these impact jobs at different skill levels. Historically, automation has affected blue-collar, low-skill jobs the most. Is that true for the new kinds of AI, like ChatGPT or image generation? Early research suggests two things. One is that these new kinds of AI increase productivity for workers.
There’s a research study focused on developers using code-generation AI, and another focused on a ChatGPT-like system to improve writing. These show a nearly twofold increase in productivity with just these early forms of AI, and over time, much greater productivity. But the studies also show that not all workers benefit equally. The study with developers showed that developers with the lowest skill levels benefited much more than developers with high skill levels. The study on writing showed that writers with lower writing skills benefited more than writers with the highest skills.
One of the things this also shows is that the new kinds of AI will certainly affect white-collar jobs, but they will also empower workers with lower skill levels and help level the playing field. In global commerce today, just knowing English is a path to a job. It’s a path to success. You could have very high intelligence and very high skill, but not knowing English could itself be the bottleneck. Suddenly bring in generative AI, and you level the playing field for those workers. You can apply this to developers. You can apply this to many other workers.