AI is here and it’s not going away. Wharton professors Kartik Hosanagar and Stefano Puntoni join Eric Bradlow, vice dean of Analytics at Wharton, to discuss how AI will affect business and society as adoption continues to grow. How can humans work together with AI to boost productivity and flourish? This interview is part of a special 10-part series called “AI in Focus.”

Watch the video or read the full transcript below.

Transcript

Eric Bradlow: Welcome, everyone, to the first episode of the Analytics at Wharton and AI at Wharton podcast series on artificial intelligence. My name’s Eric Bradlow. I’m a professor of marketing and statistics here at the Wharton School. I’m also vice dean of Analytics at Wharton, and I will be the host for this multi-part series on artificial intelligence.

I can think of no better way to start this series than with two of my friends and colleagues, who actually run our Center on Artificial Intelligence. The title of this episode is “Artificial Intelligence is Here.” As you will hear, we’ll do episodes on artificial intelligence in sports, artificial intelligence in real estate, artificial intelligence in health care. But I think it’s best to start just with the basics.

I’m very happy to have joining me today, first, my colleague Kartik Hosanagar. Kartik is the John C. Hower Professor at the Wharton School. He’s also, as I mentioned, the co-director of our Center on Artificial Intelligence at Wharton. And normally, I don’t read someone’s bio. First of all, it’s only a few sentences. But I think this actually is important for our listeners to understand the breadth and also the practicality of Kartik’s work. First, his research examines how AI impacts business and society, and as you’ll hear, that is what our center does; there are kind of two prongs. Second, he was a founder of Yodle, where he applied AI to online advertising, and more recently of Jumpcut Media, a company applying AI to democratize Hollywood. He also teaches our courses on enabling technologies and on AI, business, and society. Kartik, welcome.

Kartik Hosanagar: Thanks for having me, Eric.

Bradlow: I’m also happy to have my colleague, Stefano Puntoni. Stefano is the Sebastian S. Kresge Professor of Marketing here at the Wharton School. He’s also, along with Kartik, the co-director of our Center on AI at Wharton. His research examines how artificial intelligence and automation are changing consumption and society. And similar to Kartik, he teaches our courses on artificial intelligence, brand management, and marketing strategies. Stefano, welcome.

Stefano Puntoni: Thank you very much.

Bradlow: It’s great to be with both of you. So maybe, Kartik, I’ll throw the first question out to you. Artificial intelligence is now the big thing that every company is thinking about. But maybe even before we get to the challenges facing companies, how would we even define what artificial intelligence is? Because it can mean lots of things. It could mean everything from taking text and images and quantifying them, or it could be generative AI, which is a different part of the same coin. How do you even view it? What does it mean to say “artificial intelligence”?

Hosanagar: Yeah. Artificial intelligence is a field of computer science focused on getting computers to do the kinds of things that traditionally require human intelligence. What that is, is a moving target. When computers couldn’t play, say, a very simple game (well, chess is not simple, but maybe even simpler board games), maybe that’s the target. And then once computers can play chess, and that’s easy for computers, we no longer think of that as AI.

But really, today, when we think about what AI is, it’s again getting computers to do the kinds of things that require human intelligence. Like understanding language. Like navigating the physical world. Like being able to learn from experience, from data. So, all of that really is included in AI.

Bradlow: Do you put any separation between what I call (maybe I’m not even using the right words) traditional AI? Back in my old days, we had AI around questions like, “How do you take an image and turn it into something? How do we take video, how do we take text?” That’s one form of AI, versus what’s got everybody excited today, which is ChatGPT, a form of large language model. Do you put any differentiation there? Or is that just a way for us to understand it: one is the creation of data, and the other is using it in applications like forecasting and language?

Hosanagar: Yeah, I feel there is some distinction, but ultimately they’re closely related. Because what we think of as the more traditional AI, or predictive AI, is all about taking data and understanding the landscape of that data. Let’s say you’re predicting whether an image is of Bob or of Lisa. You kind of say, “In the image space, in this region, if the colors are like this and the shape of the eyes is like this, it’s Bob. In that area, it’s Lisa.” And so on. So, it’s mostly understanding the space of the data and being able to say, with emails, is this fraudulent or not, and in which portion of the space it has one value versus the other.
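To make that idea concrete, here is a minimal sketch (not from the interview) of predictive AI as partitioning a feature space; the features, data, and labels are invented purely for illustration:

```python
# Minimal sketch: a classifier that partitions a toy feature space into
# "fraudulent" and "not fraudulent" regions. Data invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [number_of_links, misspelling_rate]; label 1 = fraudulent.
X = [[0, 0.01], [1, 0.02], [9, 0.30], [12, 0.25], [2, 0.03], [8, 0.40]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The fitted model carves the space into regions: points on one side of the
# learned boundary are predicted fraudulent, the rest are not.
print(model.predict([[10, 0.35]]))       # likely [1] (fraudulent region)
print(model.predict_proba([[1, 0.02]]))  # class probabilities for a point
```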

Now, once you start getting really good at predicting, you can start to use those predictions to create. That’s the next step, where it becomes generative AI. Now you are predicting: what’s the next word? You might as well use that to start generating text: sentences, essays, novels, and so on.
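The jump from prediction to generation can be illustrated with a toy next-word model. Real systems like ChatGPT use transformers trained on vast corpora, but the loop of generating by repeatedly predicting, sketched below, is the same basic idea:

```python
# Toy illustration of "prediction becomes generation": a bigram model that
# predicts the next word, then samples repeatedly to produce text.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog slept on the mat".split()

# Learn which words follow which in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample the predicted next word
        output.append(word)
    return " ".join(output)

print(generate("the"))
```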

Bradlow: Stefano, let me ask you a question. If one went to your page on the Wharton website (and by the way, just for our listeners, Stefano has a lot of deep training in statistics), most people would say, “You’re not a computer scientist. You’re not a mathematician. What the hell do you have to do with artificial intelligence?” Like, “What role does consumer psychology play in artificial intelligence today? Isn’t it just for us math types?”

Puntoni: If you talk to companies and ask them why their analytics program failed, you almost never hear the answer, “Because the models don’t work. Because the techniques didn’t deliver.” It’s never about the technical stuff. It’s always about people. It’s about lack of vision. It’s about the lack of alignment between decision makers and analysts. It’s about the lack of clarity about why we do analytics. So, I think a behavioral science perspective on analytics can bring a lot of benefits in trying to understand how we connect decisions in companies to the data that we have. That takes both technical skills and human insights, psychology insights. Bringing those together has a lot of value and a lot of potential. A lot of low-hanging fruit in companies, in fact.

Bradlow: As a follow-up question, we all read these articles that say 70% of jobs are going to go away, and robots or automation or AI are going to put me out of business. Should employees be happy with what’s going on in AI? Or is the answer, it depends on who you are and what you’re doing? What are your thoughts? And then Kartik, I’d love to get your thoughts on that, including the work you’re doing at Jumpcut. Because we all know one of the biggest issues in the current writers’ strike was what’s going to happen with artificial intelligence. I’d love to hear your thoughts from the psychology or employee-motivation perspective, and then, what are you seeing out in the real world?

Puntoni: The academic answer to any question would be, “It depends.” But in my research, what I’ve been looking at is the extent to which people perceive automation as a threat. And what we find is that oftentimes the tasks being automated by AI have some kind of meaning to the person. They are essential to the way people see themselves, to their professional identity, for example. That can create a lot of threat.

So, you have psychological threats, and then you have these objective threats of jobs maybe being on the line. And maybe you’ll be happy to know that I’ve tried out the professor job on some of these scoring algorithms, and we are fairly safe from replacement.

Bradlow: Kartik, let me ask you. And let me preface this by saying, you probably don’t even know about this. Fifteen years ago, I wrote a paper with a former colleague and a doctoral student about how (I didn’t call it AI back then) to compute features of advertisements at large scale and optimally design advertisements based on a massive number of features. And I remember the reaction. I first thought I was going to get rich. I went to every big media agency and said, “You can fire all your creative people. I know how to create these ads using mathematics.” And I was looked at like I had four heads. So, can you bring us up to the year 2023? Can you tell us what you’re doing at Jumpcut, what role AI and machine learning play in your company, and what you see going on in the creative world?

Hosanagar: Yeah. And I’ll connect that to what you and Stefano just brought up about AI and jobs and exposure to AI. I just came from a real estate conference, and the panel before I spoke was saying, “Hey, this artificial intelligence, it’s not really intelligence. It just replicates whatever is in some data. True human intelligence is creativity, problem-solving, and so on.” And I was sharing over there that there are multiple studies now on what AI can and cannot do. For example, my colleague, Daniel Rock, has a study looking just at LLMs, meaning large language models like ChatGPT, before the advances of the last six months; this is as of early 2023. They found that 50% of jobs have at least 10% of their tasks exposed to LLMs, and 20% of jobs have more than 50% of their tasks exposed to LLMs. And that’s not all of AI, that’s just large language models. And that’s also 10 months ago.

And people also underestimate the nature of exponential change. I’ve been working with GPT-2, GPT-3, the earlier models of this. And I can say every year the change is an order of magnitude. So, you know, it’s coming. And it’s going to affect all kinds of jobs. Now, as of today, I can say that multiple research studies (and I don’t mean two, three, four, but several dozen) that have looked at AI’s use in multiple settings, including creative settings like writing poems or problem-solving, find that AI today can already match humans. But human plus AI today beats both human alone and AI alone.

For me, the big opportunity with AI is that we are going to see a productivity boost like we’ve never seen before in the history of humanity. And that kind of productivity boost allows us to outsource the grunt work to AI, do the most creative things, and derive joy from our work. Now, does that mean it’s all going to be beautiful for all of us? No. There are going to be some of us who, if we don’t reskill — if we don’t focus on having skills that require creativity, empathy, teamwork, leadership, those kinds of skills — will find a lot of the other jobs going away, including knowledge work: consulting, software development. It’s coming for all of these.

Bradlow: Stefano, something Kartik mentioned in his last answer was humans and AI. As a matter of fact, one of the things I heard you say from the beginning is, it’s not humans or AI. It’s humans and AI. How do you see that interface going forward? Is it up to the individual worker to decide what part of his/her/their tasks to outsource? Is it up to management? How do you see people even being willing to skill themselves up in artificial intelligence? How do you see this?

Puntoni: I think this is the biggest question that any company should be asking right now, not just about AI. Frankly, I think it’s the biggest question of all in business: how do we use these tools? How do we learn how to use them? There is no template. Nobody really knows how, for example, generative AI is going to impact different functions. We’re just learning about these tools, and these tools are still getting better.

What we need to do is deliberate experimentation. We need to build processes for learning, such that we have individuals within the organization tasked with just understanding what this can do. And there’s going to be an impact on individuals, on teams, on workflows. How do we bring this in, in a way where we don’t simply re-engineer a task to get a human out of the picture, but instead re-engineer new ways of working so that we can get the most out of people? The point shouldn’t be human replacement and obsolescence. It should be human flourishing. How do we take this amazing technology and make our work more productive, more meaningful, more impactful, and ultimately make society better?

Bradlow: Kartik, let me take what Stefano said and combine it with something you said earlier about the exponential growth rate. My biggest fear if I were working at a company today — and please, I’d love your thoughts — is that someone’s using a version of ChatGPT, or some large language model, or even a predictive model, some transformer model. And they fit it today, and they say, “See? The model can’t do this.” And then two weeks later, the model can do this. Companies, in some sense, create these absolutes. You just mentioned you were at a real estate conference. “Well, ChatGPT or large language models, AI, can’t sell homes. They can’t build massive predictive models using satellite data.” Maybe they can’t today, but maybe they can tomorrow. How do you try to help both researchers and companies move away from absolutes in a time of exponential growth of these methods?

Hosanagar: Yeah. I think our brains fundamentally struggle with exponential change. And probably there is some basis for this in studies people have done in neuroscience or on human evolution. But we struggle with it. And I see this all the time, because I’ve been part of that exponential change from the very beginning. When I started my Ph.D., it was about the internet. And I can’t tell you the number of people who looked at the internet at any given point in time and said, “Nobody will buy clothing online. Nobody will buy eyeglasses online. Nobody would do this. Nobody would do that.” And I’m like, “No, no. It’s all happening. Just wait to see what’s coming.”

I think it’s hard for people to fathom. I think leadership, as well as regulators, need to realize what’s coming, understand what exponential change is, and start to work. You brought up the Hollywood writers’ strike previously, and I forgot to address it. Now, it is true that today ChatGPT cannot write a great script. However, when we work with writers, we are already seeing how these tools can increase writers’ productivity. In Hollywood, for example, writers are notorious for missing deadlines, because writing is driven by inspiration. You’re expecting the draft today, and what’s the excuse? “Oh, I’m just stuck at this point. And when I get unstuck, I’ll write again.” You can wait months and sometimes years for the writer to get unstuck.

Now, you give them a brainstorming buddy, and they start getting unstuck and it increases productivity. And yes, they’re right in fearing that at some point they’re going to keep interacting with the AI, and keep training the AI, and someday the AI is going to say, “You know what? I’m going to try to write the script myself.” And when I say the AI is going to say that, I mean the AI is going to be good enough, and some executive is going to say, “Why deal with humans?” And do that.

I think we need to recognize that change is that fast, and start experimenting and learning. People need to start upping their game, reskilling, and getting really good at using AI to do what they do. That reskilling is important. Stop viewing this as a threat. Because what’s happening is, you’re standing somewhere and there’s a fast bullet train coming at you. And you’re saying, “That train is going to stop on its own.” No, it’s going to run over you. The only thing you can do, and have to do, is get to the station, board the train, be part of that train, and help shape where it goes. All of us need to help shape where it goes.

Bradlow: Yeah. One example I like to give is that for 25-plus years I’ve been doing statistical analysis in R. And of course, for the last five to seven years, Python’s taken a much larger role. And I always promised myself I was going to learn Python. Well, I’ve learned Python now. I stick my R code into ChatGPT, and I tell it to convert it to Python. And I’m actually a damn good Python programmer now, because ChatGPT has helped me take structured R code and turn it into Python code.
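As a concrete illustration of that kind of conversion: the R snippet and its Python translation below are invented for this article, not taken from Bradlow’s actual code.

```python
# Hypothetical example of an R-to-Python conversion of the kind ChatGPT can
# produce. The original R code (invented for illustration):
#
#   fit <- lm(sales ~ price + adspend, data = df)
#   summary(fit)
#
# A faithful Python translation using pandas and statsmodels:
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "sales":   [120, 135, 150, 160, 155, 170],
    "price":   [10.0, 9.5, 9.0, 8.5, 8.8, 8.0],
    "adspend": [1.0, 1.2, 1.5, 1.7, 1.6, 2.0],
})

fit = smf.ols("sales ~ price + adspend", data=df).fit()
print(fit.summary())  # plays the same role as summary(fit) in R
```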

Hosanagar: That’s a great example. And I’ll give you two more examples like that. The head of product at my company, Jumpcut Media, had this idea for a script summarization tool. What happens in Hollywood is the vast majority of scripts written are never read because every executive gets so many scripts. And you have no time to read anything. And you end up prioritizing based on gut and relationships. “Eric’s my buddy. I’ll read his script, but not this guy, Stefano, who just sent me a script. I don’t know him.” And that’s how decision-making works in Hollywood.

So, the head of product, who’s not a coder — he’s actually a Wharton alumnus — had this idea for a great script summarization tool that would summarize things using the language and parlance of Hollywood. But our engineers were too busy with other efforts, so he said, “While they’re doing that, let me try it on ChatGPT.” And he built the entire minimum viable product, a demo version of it, on his own, using ChatGPT. It’s actually on our website at Jumpcut Media, where our clients can try it. And that’s how it got built: by a guy with no development skills.

I actually demonstrated, during this real estate conference, this idea: you post a video on YouTube, you’ve got 30,000 comments on it, and you want to analyze those comments and figure out what people are saying. You want to summarize it. I went to ChatGPT, and I said: “Six steps. First step, go to a YouTube URL I’ll share and download all the comments. Second step, do sentiment analysis on them. Third step, find the comments that are positive, send them to OpenAI, and give me a summary of all the positive comments. Fourth step, send the negative comments to OpenAI and give me a summary. Fifth step, tell the marketing manager what to do. And sixth, give me the code for all of this.” It gave me the code at the conference, in front of all these people. I put it in Google Colab, ran it, and we had the summary. And that’s me writing not a single line of code myself. It’s not the most complex code, but this is something that previously would have taken me days, and I would have had to involve RAs and so on. And now I can get that done.
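A sketch of what that generated pipeline might look like follows, under some assumptions: the YouTube download step is stubbed out, sentiment comes from TextBlob, and summaries come from OpenAI’s chat API. The model name, prompts, and helper functions are illustrative choices, not taken from the demo.

```python
# Sketch of the comment-analysis pipeline described above. The YouTube step
# is a placeholder; in practice you would use the YouTube Data API there.
from textblob import TextBlob
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def fetch_comments(video_url: str) -> list[str]:
    # Placeholder standing in for the real comment-download step.
    return ["Loved this video!", "The audio was terrible.", "Great tips."]

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

comments = fetch_comments("https://www.youtube.com/watch?v=...")
positive = [c for c in comments if TextBlob(c).sentiment.polarity > 0]
negative = [c for c in comments if TextBlob(c).sentiment.polarity < 0]

pos_summary = ask("Summarize these positive comments:\n" + "\n".join(positive))
neg_summary = ask("Summarize these negative comments:\n" + "\n".join(negative))
advice = ask("Given these summaries, advise the marketing manager.\n"
             f"Positive: {pos_summary}\nNegative: {neg_summary}")

print(pos_summary, neg_summary, advice, sep="\n\n")
```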

Bradlow: Imagine doing that in real estate, about a property or a developer. And you say it doesn’t affect real estate? Of course it does! Absolutely, it could.

Hosanagar: It does. I also showed them: I uploaded four photographs of my home. Nothing else, just four photographs. And I said, “I’m planning to list this home for sale. Give me a real estate listing to post on Zillow that will make people read it and get excited to come tour this house.” And it gave a great, beautiful description. There’s no way I could have written that. I challenged them: how many of you could have written this? And everyone at the end was like, “Wow. I was blown away.” And that is something that is doable today. I’m not even talking about what’s coming soon.
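For readers curious what that demo might look like in code, here is a minimal sketch assuming OpenAI’s vision-capable chat API. The file names, prompt, and model choice are invented for illustration.

```python
# Sketch: generate a real estate listing from home photos using a
# vision-capable chat model. File names and prompt are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

photos = ["front.jpg", "kitchen.jpg", "living_room.jpg", "backyard.jpg"]
content = [{"type": "text",
            "text": "I'm planning to list this home for sale. Write a Zillow "
                    "listing that makes readers excited to come tour it."}]
for path in photos:
    content.append({
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{encode_image(path)}"},
    })

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice of a vision-capable model
    messages=[{"role": "user", "content": content}],
)
print(resp.choices[0].message.content)
```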

Bradlow: Stefano, I’m going to ask you, and then Kartik as well: what’s at the leading edge of the research you’re doing right now? I want to ask each of you about your own research, and then I’ll spend the last few minutes we have talking about AI at Wharton and what you’re hoping to accomplish. Let’s start with your own personal research. What are you doing right now? Another way I like to frame it: if we’re sitting here five years from now, and you have a bunch of published papers and you’ve given a lot of big podium talks, which I know you do, what will you be talking about having worked on?

Puntoni: I’m working on a lot of projects, all in the area of AI. And there are so many exciting questions, because we never had a machine like this: a machine that can do the stuff that we think is crucial to defining what a human is. This is actually an interesting thing to consider. If you went back in time a few years and asked, “What makes humans special?” people were thinking, maybe compared to other animals, “We can think.” And now you ask, “What makes a human special?” and people think, “Oh, we have emotions, or we feel.”

Basically, now what makes us special is what makes us the same as other animals, to some extent. You see how the world is really deeply changing. And I’m interested in, for example, the impact of AI on the pursuit of relational goals, or social goals, or emotionally heavy types of tasks, where previously we never had the option of engaging with a machine, but now we do. What does that mean? What are the benefits this technology can bring, but also, what might be the dangers? For example, for consumer safety, as people might interact with these tools while experiencing mental health issues or other problems. To me, that’s a very exciting and important area.

I just want to make a point that this technology doesn’t have to be any better than it is today for it to change many, many things. I mean, Kartik was saying, rightly, this is still improving exponentially. And companies are just starting to experiment with it. But the tools are there. This is not a technology around the corner. It’s in front of us.

Bradlow: Kartik, what are the big open issues that you’re thinking about and working on today?

Hosanagar: Eric, there are two aspects to my work. One is slightly more technical, and the other is focused more on human and societal interactions with AI. On the former side, I’m spending a lot of time thinking about biases in machine-learning models, in particular a few studies related to biases in text-to-image models. For example, you go in and write a prompt, “Generate an image of a child studying astronomy.” If all 100 images are of a boy studying astronomy, then you know there’s an issue. And these models do have these biases, simply because the training data sets have them. But if I get an individual image, how do I know whether it’s OK or not? We’re doing some work on detecting bias and debiasing, and on automated prompt engineering as well: you state what you want, and we’ll figure out how to structure the prompt for a machine-learning model to get the kind of output you want. That’s a bit on the technical side.
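A minimal sketch of the kind of audit Hosanagar describes appears below. It assumes stand-in functions for the image generator and the attribute classifier (both simulated here, invented for illustration), and uses a simple binomial test to flag a skewed distribution.

```python
# Sketch of a bias audit for a text-to-image model. The generator and the
# attribute classifier below are simulated stand-ins; a real audit would
# call actual models in their place.
import random
from scipy.stats import binomtest

def generate_image(prompt: str):
    return object()  # stand-in for a real text-to-image model call

def classify_gender(image) -> str:
    # Stand-in for a real attribute classifier; simulates a skewed model.
    return random.choices(["boy", "girl"], weights=[0.9, 0.1])[0]

def audit(prompt: str, n: int = 100, expected_share: float = 0.5) -> None:
    images = [generate_image(prompt) for _ in range(n)]
    boys = sum(classify_gender(img) == "boy" for img in images)
    # Test whether the observed share is consistent with the expected one.
    result = binomtest(boys, n, p=expected_share)
    print(f"{boys}/{n} images depict a boy (p-value = {result.pvalue:.4f})")

audit("Generate an image of a child studying astronomy")
```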

On the human-and-AI side, most of my interest is around two themes. One is human-AI collaboration. If you look at any workflow in any organization that AI can now touch, we do not understand today what is ideally done by humans and what by AI. In terms of organization design and process design, we understand historically, for example, how to structure teams and how to build team dynamics. But if the team is AI and humans, how do we structure that? What should be done by whom? I have some work going on there.

And the other one is around trust. AI has a huge trust problem today. We were just talking about the writers’ strike. There’s an actors’ strike, and many more issues coming up. So, what it takes to drive human trust and engagement with AI is another theme I’m looking at.

Bradlow: Maybe in the last few minutes, Stefano, can you tell us, and our listeners here on SiriusXM and on our podcast, a little bit about AI at Wharton, and what you’re hoping to study and accomplish through a center on artificial intelligence here at Wharton? And then we’ll get Kartik’s thoughts as well.

Puntoni: Thank you for organizing this podcast, and thanks to SiriusXM for having us. I think it’s a great opportunity to get the word out. The AI at Wharton initiative is just starting out. We are a group of academics working on AI, tackling it from different angles, for the purpose of understanding what it can do for companies and how it can improve decision-making in companies. But also, what the implications are for all of us: as workers, as consumers, and as a society broadly.

We’re going to try initiatives around education, around research, around dissemination of research findings, and generally, try to create a community of people who are interested in these topics, who are asking similar questions, maybe in very different ways, and who can learn from one another.

Bradlow: And Kartik, what are your thoughts? You’ve been involved with lots of centers over the years. What makes AI at Wharton special, and why are you so excited to be in one of the leadership positions of it?

Hosanagar: Yeah. I think, first of all, to me, AI is maybe not even a once-a-generation, but a once-in-several-generations kind of technology. And it’s going to open up so many questions that will not be answered unless we create initiatives like ours. For example, today, computer scientists are focused on creating new and better models. But they’re assessing these models somewhat narrowly, in terms of the accuracy of the model and so on, and not necessarily human impact, societal impact, some of these other questions.

At the same time, industry is affected by a lot of this. But they’re trying to put the fire out, focused on what they need to get done this week, next week. They’re very interested in the question of where this will take us three, four years from now, but they have to focus quarter by quarter.

I think we are uniquely positioned, here at Wharton, in terms of having both the technical chops to understand those computer science models and what they’re doing, as well as people like Stefano and others who understand the psychological and the social science frameworks, who can bring in that perspective and really take a five, 10, 15, 25-year timeline on this and figure out, what does this mean for how organizations need to be redesigned? What does this mean in terms of how people need to be reskilled? How do our own college students need to be reskilled?

What does this mean for regulation? Because, man, regulators are going to struggle with this. And while the technology is moving exponentially, regulators are moving linearly. They will need that thought leadership as well. So, I think we uniquely fill that gap on those kinds of problems: big, open issues that are going to hit us in five or 10 years, while we are currently too busy putting out the fires to worry about the big avalanche coming our way.

Bradlow: Well, I think anybody who has listened to this episode will agree: artificial intelligence is here, which is the title of this episode. Again, I’m Eric Bradlow, professor of marketing and statistics here at the Wharton School, and vice dean of Analytics. I’d like to thank my colleagues, Stefano Puntoni and Kartik Hosanagar. Thank you for joining us on this episode.

Hosanagar: Thank you, Eric.

Puntoni: Thank you.