Professor Ethan Mollick, co-director of Wharton Generative AI Labs, examines how artificial intelligence continues to advance without slowing, highlighting its growing adoption by business, potential labor market effects, and the importance of guardrails as organizations prepare for 2026. This episode is part of the “Faculty Predictions” series.

Transcript

Dan Loney: We continue our year-end series talking about artificial intelligence, which has had a growing influence on our lives. But as we turn the calendar to 2026, what should everybody be thinking about where AI is and how it will continue to develop in the next year? Pleasure to be joined by Ethan Mollick, associate professor here at the Wharton School. Where do you think we are right now in terms of the development of AI, how we use it, how we think about it, and maybe even more importantly, what should we expect going into the next year?

Ethan Mollick: I think the biggest news from the last year is that there hasn't been a slowdown in AI. Obviously everyone's discussing financial bubbles and other concerns that we can discuss. But the biggest issue, if you look past the ups and downs of the market and who wins or loses, is that we haven't hit the wall yet. AI development continues to progress, and AI models have gotten much smarter and better. For example, only about 2.5% of the superforecasters that Professor Phil Tetlock, another Wharton professor, has famously been tracking thought that AI models would get a gold medal at the International Mathematical Olympiad by 2025. That's 2.5%. It turned out that both Google and OpenAI achieved gold-medal performance at the International Mathematical Olympiad this year with AI models. So, we're seeing very rapid gains still continuing.

Loney: From a business perspective, it seems like there is more and more confidence from the C-suite and from leadership up and down the chain that AI is something they can use as an important tool moving forward.

Mollick: Yes, we've seen surveys by fellow Wharton professors on this exact topic showing that a very high percentage of C-suite executives say they're getting returns from AI now, which is a big change from earlier this year. There's no sign of momentum slowing in terms of actual adoption. AI has reached a billion regular users. Again, the questions about business, economics, and the environment are all separate but intertwined issues.

Loney: One of which is about labor and how AI is going to affect the labor market. Some companies were making job cuts as we got toward the end of 2025, and a lot of companies do that anyway. But the question for a lot of people is, “Will AI have a significant impact on the labor force moving forward?” Some people say that it will work in unison with employees. Others say it will replace them.

Mollick: I wish I had a crystal ball. I think the fair thing to say is that AI is probably not responsible for large-scale labor market changes yet, as of the end of 2025, although Erik Brynjolfsson and company at Stanford have a paper suggesting that fewer junior people and more senior people are being hired in fields like software. But it really is unclear at this point. I don't think there's any mass unemployment from AI. But if you look at the goal of the AI companies, it's to build a machine smarter than a human in the next couple of years, to replace all human labor. I think it's a little unrealistic that they're going to get there, but we also can't count that out. I think we're going to see this happen. I think the big debate is, how do we do this in a way that's pro-worker, pro-human, rather than just deploying a replacement for people?

Loney: Looking at it from a technology perspective, we're already starting to hear how quantum computing will have an impact on our culture. But how do you see generative AI continuing to differ from other technological developments that may affect our society moving forward?

Mollick: In a lot of ways, gen AI is clearly the big one. We talked about quantum. We can talk about fusion. But those remain future technologies. Again, a billion people are using generative AI models on a weekly basis right now. We have evidence that they can do hard jobs, that they can do hard math. People are treating these things as companions. They're getting help with school, for better or for worse. I mean, that's already here. The change is baked in. We could stop AI development today and have 10 years of disruption as we figure stuff out. Quantum computing might change everything, but quantum computers are still experimental. AI is deployed at scale already. And for better or for worse, it's become a major industry.

Loney: Where is the growth in terms of the use of generative AI in the next several months and years?

Mollick: Right now, there's no sign of a slowdown in the growth of AI adoption. It's going to move increasingly inside organizations. We're seeing organizational adoption increase as well, and organizations have to rethink processes and approaches to figure out how to use AI effectively. It can't just be chatbots where people ask, “Write my essay for me,” or “Summarize this data for me.” It's going to have to be a deeper form of interaction. So, watch for that kind of agentic work starting to appear, where people work deeply with AI, assigning AI tasks that it completes, and so on.

Loney: You've talked in the past about having guardrails around some of this. How important are the guardrails as we move forward?

Mollick: Guardrails are an important part of long-term AI safety, and also of immediate safety. We know people consult AI on psychological matters, legal matters, and medical issues. Getting the AI to answer accurately, to tell people to seek help when needed, and to answer only when it's ethical to do so remains a very large problem.