Professor Kevin Werbach has spent his career at the crossroads of business, law, and emerging technology. He talks about his latest project, the Wharton Accountable AI Lab, which aims to help guide the responsible development and governance of artificial intelligence.
Transcript
What Is Accountable AI?
Angie Basiouny: Welcome to Knowledge at Wharton. I’m here with Kevin Werbach, professor and chair of the Department of Legal Studies and Business Ethics at Wharton. He’s also faculty director of our new Wharton Accountable AI Lab, which is dedicated to advancing responsible development of artificial intelligence. That is what we’re going to talk about today.
Let’s jump right in. Kevin, what is accountable AI? Why did you start this lab?
Kevin Werbach: Accountable AI is about understanding the challenges that AI poses. The starting point is that AI is an incredible innovation that has tremendous potential to create value for businesses and to do a great deal of social good. But we can’t realize that potential, we can’t achieve the benefits of AI, without acknowledging, understanding, and mitigating the risks, and thinking about the potential dangers, harms, and problems with AI.
Accountable AI is not just about thinking through what the risks are, although that’s part of it. It’s not just asking, from an abstract perspective, “What principles should organizations have about what they’re doing with AI?” It’s not just saying, generally, “We should be responsible about AI or have well-governed AI,” although that’s part of it too. It’s asking, systematically: How do we put into place the kinds of practices and understandings it takes to ensure that AI systems are developed and deployed in ways that maximize their benefits and appropriately mitigate, address, or redress the problems and harms?
“Accountability” is something I chose intentionally. It’s about making those connections: the connection between the risks and the potential or real harms, and what actually happens to prevent them, to mitigate them, to understand them, to address them. Having all those practices in place and doing it in a thoughtful, systematic, structured, rigorous way, which, of course, is very consistent with how we think about things at Wharton.
Researching Effective AI Governance and Best Practices
Basiouny: Are you going to develop best practices or prescriptive information for business leaders, for tech companies, about how to use AI?
Werbach: One of the things that I have found in speaking with companies, in the research that I do in this area and as we were putting together the plans for the lab, is that most of them are really struggling to get on top of these issues. They don’t understand what other organizations are doing. There are a few companies that are very far advanced; some of the big technology companies, especially, have invested significantly in responsible AI or AI governance. But even they have questions: What should they be doing? What are other companies doing? Are they appropriately addressing all of the issues? What does the data show about which kinds of governance mechanisms are effective?
Most companies are not even at that point. So, we are certainly not going to say, “We’ll tell companies what the best practices are.” AI is so diverse, and there are so many different kinds of AI. There are machine learning systems, and there’s generative AI. It’s a different thing if we’re talking about a company that is doing hardcore technical development of AI models, versus a company that may be a very large enterprise but is deploying a system it is procuring from elsewhere, versus a small startup that is involved in this area. And it depends on what industry you’re in. We are first going to try to understand what organizations are actually doing: what’s successful, what’s not successful, and where the gaps are, and then try to synthesize some of that to help organizations understand what the possibilities are. And it’s a moving target. It’s going to be an ongoing process of understanding what can be done, which problems are most concerning, and how they can be overcome.
Basiouny: I want to tell people a little bit about your background. You have a law degree from Harvard. You came to Wharton in 2004, so going on 21 years now. But you also worked in the Clinton administration and the Obama administration. You worked with the FCC on emerging technology. You’ve written four books about technology, including one on blockchain. You’ve worked on the business and ethical implications of emerging technology. How are the concerns about AI different from what we’ve dealt with in the past? Or are they the same?
Werbach: Some of both. As you note, I’ve been working on emerging technologies my whole career. When I started in the 1990s, that was the internet. I wrote a paper on internet policy at the Federal Communications Commission. This was early on, before I was an academic. At that point, there were something like fewer than 50 million people on the internet in the entire world, and the vast majority of them were people dialing up over their telephones to the proprietary America Online service. There was not a single person in all of China who had a private internet connection at that point.
Yet we could see the issues that were coming up. We could see that this was a technology that had the potential to change the world, and we needed to understand what the issues were. Throughout my career, I’ve tried to get engaged with major, important technology developments early enough to identify the issues, to work on helping to develop the regulatory strategies, to work with government, and to identify and highlight the problems before it was too late. I did that with broadband technology. I did that with something called gamification, which is applying psychological and other techniques from video games to motivate people in different contexts. I did it with blockchain, which was another field that I saw coming that had this diverse potential but was still poorly understood. And frankly, it’s still poorly understood today.
I put AI in a similar bucket. We are, in some ways, very far along with AI. If you’re talking in terms of machine learning technology, AI is decades old. In some ways, though, we’re just at the beginning. We’re just a couple of years after the ChatGPT “shot heard around the world” announcement that kicked off this incredible race to understand and exploit the potential of generative AI.
We know there are all these problems. We know there are issues about privacy and bias and intellectual property and manipulation and so on, and yet we don’t have good solutions. AI is similar to these earlier technologies in that it starts at a point where it has tremendous potential and generates a lot of excitement, but there’s a broad lack of understanding about whether we’ll really realize this potential and what the impacts will be. But every technology is different. And with each of these waves, we build on what came before. AI leverages the fact that we have the internet, and we have these incredible networks and technical capabilities that allow things to be deployed and scaled very fast around the world. We see this tremendous amount of activity and investment going into this space. So, it’s different than it was back 30 years ago, when I was looking at dial-up internet. But it’s similar in that we have this period of uncertainty. And I think that is the point where it’s most important to really dig in. Think about the ethical issues, think about the governance issues, think about the regulatory issues. That’s really the genesis of the Accountable AI Lab.
AI Leaders Must Balance Potential with Caution
Basiouny: In my experience interviewing people about AI, I find that there are three camps. There are the people who fear it, the people who celebrate it, and then there are folks who are just proceeding with caution. What camp do you fall into and why? What’s your overall message about AI, especially heading up this lab?
Werbach: All three. You can’t fear it without celebrating it. Because if you fear it, it means that you believe AI has this incredible potential, that it’s going to be deployed and going to have real impacts. Similarly, you can’t celebrate it without recognizing a whole range of challenges. Some of them are very speculative, but many of them are very real. I talk to lots of companies that say, “Our focus is not on regulation. Whatever the government tells us, we know we’re going to deploy these systems that might have problems. And if we build and deploy something and it breaks, it fails, the generative AI system hallucinates and gives false information, that could be a big problem for us with our customers in the marketplace.”
These are companies that are excited about it, but they realize they need to understand the problems. The reality is, there are some aspects where speed is absolutely essential. Companies need to invest. Things are developing so fast. There’s so much potential. You don’t want to get left behind. But you need to understand the points where care is warranted, where there is the opportunity and the need to slow down and ask and answer these questions.
Even if the technology is moving really fast, there’s going to be regulation. There are going to be laws passed. There are going to be court cases addressing these issues. You can’t just ignore all of that. You have to appreciate the development of the legal process and of the deeper understandings that come out of research in lots of different fields, not just in law: What are the technical capabilities? What can we do to mitigate bias? What is the potential for explanation of generative AI systems? It’s a fascinating area of advanced research. What is the development of ethical, psychological, and behavioral understandings of what’s going on here? That is happening over time, not at the same speed as the technical development of AI, but it’s going to have a really big impact on being able to realize the full potential of the technology.
Basiouny: Before we go, I do want to let folks know about your podcast. It’s called The Road to Accountable AI. Can you tell us a little bit about it?
Werbach: The podcast is an interview show. I spend 30 to 40 minutes on each episode talking with a guest. It’s a range. I speak with senior government officials from multiple countries. I speak with technologists. I speak with academics. I speak with business executives who are leading the responsible AI groups, or AI governance groups, at some of the largest companies. And I speak with startups that are building tools to address some of these problems. It’s really intended as an educational journey on how this broad area of accountable AI is developing, and trying to help people understand what the state of the art is, and what the questions are that they should be thinking about.