Wharton’s Philip Nichols explains why AI isn’t useful for combating corruption, at least not yet. This episode is part of the “Research Roundup” series.

Transcript

Should AI Be Used to Fight Corruption?

Dan Loney: As we continue to hear that artificial intelligence is the technology that every firm should implement, our guest today questions whether that can be the case when you're thinking about things like anti-corruption efforts by countries and agencies around the globe. Pleasure to be joined here in studio by Philip Nichols, professor of legal studies and business ethics here at the Wharton School.

You wrote about that question exactly in a recent article for the American Business Law Journal. Let's dig right into the guts of this, because everybody thinks that every company needs to have AI in everything they are doing. How does AI fit in, or maybe not fit in, the efforts to prevent bribery, corruption, etc.?

Philip Nichols: Great question. To answer it, we need to understand two things about corruption and two things about artificial intelligence. The first, about corruption: it manifests itself differently everywhere. The corruption you experience in one country is different from that in another. In one industry, different from another. In one firm, different from another. And that means data is not fungible. Data is not easily translatable or usable from one context to another.

The second thing to understand about corruption, and compliance in general, is that misbehavior, corruption, occurs in the shadows. People don't want to generate a lot of data about it, right? Two things to understand about artificial intelligence, in addition to its using a lot of energy and processing power: it requires a solid model of what the world looks like, and it requires a lot of data.

If we put those two things together, and not even talking specifically about what we're asking artificial intelligence to do, but just whether artificial intelligence is viable with this particular realm of compliance, we don't have the data and we don't have the model. Therefore, what we're likely to get when we ask questions of artificial intelligence is hallucinations or nonsense.

Loney: I guess with the issue of compliance, you have to bring in the point that the rules in Europe are different than what we see here in the United States and other parts of the world. We think about the use of AI as something that is global. We think about the ways of protecting from corruption as global. Yet you have these different sets of rules in place.

Nichols: Yeah. The European Union's rules regarding artificial intelligence really protect individual dignity. Whereas the rules for the use of AI in North America, particularly the United States, are much looser. Using AI to investigate one of your employees, are they accumulating wealth surreptitiously? You wouldn't be able to do that in Europe. And if you're a transnational firm, you really need to comply with Europe's [regulations]. They're like the gold standard of use of AI. There's a lot of things, like you point out, that we won't be able to dig into, investigate, etc., because that really would be an infringement on the dignity of the people who work with us and for us.

Loney: A lot of what is also asked involves up-to-date process, up-to-date software. At least right now, it seems like up-to-date in the realm of what we're talking about is AI. It's taking that next step.

Nichols: Exactly. That's my concern. I have no doubts that someday, new forms of artificial intelligence will be very useful. Empathetic or empathic AI, intuitive AI. But right now, most of our AI is either in the form of large language models or is self-learning and therefore can categorize and make sense of large masses of unsorted data. Neither one of those right now fits the things we need to do with corruption. The notion that, “Oh, we adopt artificial intelligence and therefore we're up to date and doing the best we can,” really doesn't work when we get into the weeds and think we want to solve these problems, not just put a little sticker on that says, “Hey, we’re using AI.”

Loney: Are there components of what is being done now to try to prevent corruption where you see AI could have a level of impact?

Nichols: Absolutely. Two areas. One is in generally warning people. We call that red flags. Sorting data. Your firm does this transaction, this action or whatever, a particular way thousands of times, and then there's one that's different. Well, that's a red flag. It doesn't necessarily mean it's misbehavior, corruption. It just means that someone should take a look at it and see why it's different. AI is great for that.
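The red-flag screening Nichols describes can be sketched very simply: compare each transaction to the firm's historical pattern and surface the outliers for human review. This is a minimal illustrative sketch, not any specific compliance tool; the field names and the three-standard-deviation threshold are assumptions chosen for the example.

```python
# Minimal red-flag screen: flag transactions that deviate sharply
# from a firm's historical pattern. The threshold is an illustrative
# assumption; a flag is a prompt for human review, not a finding
# of misconduct.
from statistics import mean, stdev

def red_flags(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` standard
    deviations from the mean of the series."""
    if len(amounts) < 2:
        return []
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # perfectly uniform history: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# One unusual payment among a thousand routine ones gets flagged.
history = [100.0] * 1000 + [5000.0]
print(red_flags(history))  # → [1000]
```

The point of the sketch matches Nichols's caveat: the statistics only say "this one is different," and a person still has to look at why.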

The other way that AI is really useful is when we do have huge masses of data. So the Panama Papers, Pandora leaks, etc. Or an individual firm. Siemens, Walmart. Right? These are incidents of corruption that did yield massive amounts of data. And there, artificial intelligence can be useful.

In general, if we're talking about very large firms, particularly firms that have experienced corruption in ways that can be captured, AI is great. When we're talking small to medium firms that don't generate a lot of data, AI can be counterproductive. It can yield hallucinatory responses that could hurt individual people or hurt transactions or relationships.

Loney: When you think about the rules that were put in place in Europe around GDPR in the last decade or two, they were seen to be at the forefront of the move to protect personal data. Is there a way that you could see a region of the world finding a path sooner rather than later to implement AI in fighting corruption?

Nichols: I think we're far away from it in general. Your question is a really interesting question, and it's a question that goes beyond just compliance or detection of misbehavior. Europe, China and the U.S. all have very different approaches. Which one of the approaches is going to yield the most accurate detection of misbehavior? I think there's an argument to be made for Europe's approach. Because unlike our use of, for example, AI in hiring right now, Europe's approach might lead to a better model, which is half of what makes AI work. Model and data. And Europe's approach might lead to a better model.

It's a really interesting question that we don't really know enough about. We're just a few years into this transition, but there's a good argument for Europe. There's also a good argument for the U.S., in that just letting people run wild and do whatever they want to with AI might yield the magic bullet, too.

AI’s Impact on Policy and Regulation

Loney: When you think about how regulation plays a role in how companies operate, the unknowns we still have about AI in a lot of areas also leave us in the dark about how it would interact with regulation.

Nichols: Yeah. The thing that I'm most concerned about, the thing I wanted to experiment with and play with was, since AI seems to be the magic Band-Aid for everything in the world, you've got to have AI or you're behind. Is that true right now? And the answer I came up with is no. There's plenty of areas where AI is either not helpful or detrimental, and compliance regulation is one of them.

Loney: Is that going to lead us to rewriting policy and regulation in some of these areas?

Nichols: Yeah. I imagine, just as with the Industrial Revolution, rules change because the world we work in and the world we live in have changed. In a number of years, the regulations, the rules for business behavior, will be written with an eye toward the fact that most businesses incorporate artificial intelligence, because by then we'll have a more realistic understanding [of it].

Loney: In the times that you and I have talked about these issues, I think it's fair to say when you think about corruption, you're thinking about it at the business level. At the firm level.

Nichols: I do. Absolutely. I'm here at Wharton.

Loney: When you're thinking about bribery, it's more of a person-to-person connection, between two people.

Nichols: Yes.

Loney: How does that difference potentially impact the thought process of using AI to mitigate the issues around bribery?

Nichols: If we go to the other end and talk about the use of artificial intelligence in government, in business-to-business transactions, there is great potential for abuse. There's great potential, particularly since AI can leverage at huge scales, for all kinds of misconduct. But there's also great opportunity for reducing almost to nothing that kind of human interference and discretion that can lead to corruption.

The digital platform infrastructure in India, Aadhaar, is a great example of technology. We're not even talking self-learning or whatever falls in the AI bucket. But there, technology has done exactly what you describe: significantly reduced low-level corruption, significantly increased the amount of money that people are entitled to from various government distributions that they're getting at the village level. It’s not the compliance end, but the government operation and regulatory end. Yeah, there's potential for abuse. But there's also great potential for cleaning up corruption, which is a really neat story.

Loney: What would some of the policies or rules be in order to have that kind of effectiveness?

Nichols: One is giving people digital IDs. We kind of jerry-rig that in the United States and in North America right now. We use somebody's cell phone number, the email address. But if you look at places like Estonia, where once they re-achieved independence, digital IDs opened up so much for working with technology, which eventually will include self-learning and artificial intelligence. You look at Aadhaar in India. Once you have that digital ID, even at the village level, it opens up access to the kinds of things you're talking about. So countries with digital IDs are leapfrogging those of us who still ask, “What's your mobile phone number?” That's just a really simple kind of thing.

Now, how one approaches the mass leveraging, the depersonalization, all of these things, with still some kind of justice and fairness? That's something that in the Industrial Revolution, it took them 80 years at least to figure out. Hopefully we'll be a lot faster. But we need to see what intuitive AI looks like, what empathic AI looks like, before we start developing those kinds of regulations.

Loney: Maybe I'll throw in the cynic in me a little bit. But don't you also have to assume that, if we are looking at ways to implement AI to prevent this type of criminal activity, the criminals are out there looking for ways to use AI to further this activity?

Nichols: Absolutely. Without question. The leveraging gives them huge potential for misconduct. The ability to crack other previously secure forms of communication or interaction. Huge potential. So, there's no question. It's not that AI doesn't work now. But even in the future, it's not a magic bullet. It's a tool that we need to use wisely. We can never think of it as just, “Well, we're done now. AI is taking care of everything.” Yeah, it may become sentient. It may become whatever, whatever, whatever, but it's still a tool, not a magic bullet.

Loney: And I'll finish on this. Because even with the implementation of AI in so many firms right now, the expectation is you still have to have the human component in there as part of it.

Nichols: Absolutely. Without question.