Many executives believe that all it takes for artificial intelligence to deliver great results is a combination of data and data scientists. But that is not true. Artificial intelligence is a lot more complex and there are many hurdles to be overcome, says Julien Blanchez, global head of data and analytics at SWIFT. For instance, there are operational issues, concerns around compliance and security, and ethical dilemmas. In a conversation with Knowledge at Wharton at a recent conference on artificial intelligence and machine learning in the financial industry organized in New York City by the SWIFT Institute, Blanchez discussed the power of AI, limitations in deploying it, and other related issues.
An edited transcript of the conversation follows.
Knowledge at Wharton: Could you share with us your experience of how senior executives think about AI?
Julien Blanchez: Well, this is how a practitioner explained it to me. His management was pushing him to do AI, and their argument was: “You just get a lot of data. Then you hire good data scientists. And then you do magic. So, 1 + 1 + 1 = money flows.” When he told me this, my first thought was: “Yes, money does flow, but in the wrong direction. Money flows out.” AI is so much more complex than just getting lots of data and good data scientists. For instance, there is the issue of the quality of the data. There is also the technology piece that a businessperson might overlook or get completely wrong.
Knowledge at Wharton: Why does money flow out? What hurdles do companies face in implementing AI initiatives?
Blanchez: The hurdles fall into three broad categories. The first is operational hurdles. Where do you start? With people? With data? With technology? And how does that work? The second hurdle is around compliance and security. Data has always been a sensitive issue, but it is getting increasingly more so because we now have a better understanding of how big an impact AI can have. There is increasing public concern around this, and the regulators have an opinion. You need to navigate these new complexities in order to make it work. Finally, there is the ethical/societal question. Decision-makers, team members, and other business peers are questioning whether we really want to do this. How do we solve the trolley problem, for example?
“AI is much more complex than just getting lots of data and good data scientists.”
Knowledge at Wharton: For those who haven’t encountered it before, what is the trolley problem?
Blanchez: The trolley problem is a philosophical and ethical question that dates back a long way. [In our current context], it is about trying to frame the kind of dilemma that a machine will have to face, a dilemma that people have not yet been able to resolve in a rational way. Imagine that a driverless car is driving on a certain road and has to decide whether it should swerve left and crush a mother and her three children, swerve right and crush an old priest, or swerve into a wall and crush its passenger. What should the machine do? How do we tell the machine to behave in this kind of scenario?
Think about the marketing pitch of the autonomous car manufacturers. They could pitch a highly ethical car. They could say: “We have the most ethical car. We’ll save all the lives, and we will crash you into the wall.” Do you think that will sell any cars? Clearly, it won’t. Or, they could say: “We will save you at all costs. We will crush anyone but you.” This will also not work.
So we are faced with these kinds of important questions. These questions have not yet been solved by philosophy, so putting this onto technology is really a long shot. At the same time, though, we also have to realize that in the meantime thousands of lives are lost every year because of careless driving by human beings. Machines could address this problem since they do not text while at the wheel, or drive under the influence of alcohol or have an argument with their teenagers or their five-year-olds in the backseat.
Knowledge at Wharton: Another ethical question that one hears about is that of bias — algorithmic bias. Often algorithms make decisions based on gender or race. In your experience, how are financial institutions thinking about these issues?
Blanchez: Finance has been used for a long time as a discriminator. As this technology advances, it will change the equation and make it impossible to discriminate on the basis of, say, gender. In fact, technology will show that in some practices women are actually cross-subsidizing less well-equipped men. We are entering a new world where technology challenges some of the ways in which we work at present. But it also has a shot at making things fairer.
“Data has always been a sensitive issue, but it is getting increasingly more sensitive because we now have a better understanding of how big an impact AI can have.”
Knowledge at Wharton: One way in which things can become fairer is for regulators to set clear rules about what can and cannot be done. Do you see that happening with financial services, particularly banks?
Blanchez: Absolutely. The acceleration on the research side in the past few years, and the learning curve of regulators, has been quite impressive. I have been engaging with regulators on technical matters over the past few years, and I’ve been surprised by how much this has changed. However, that does not mean we should expect regulators to accelerate much faster. There’s a good reason why regulation should stay perhaps a step behind technology. But I’m optimistic about the speed with which regulators have been looking at things.
Knowledge at Wharton: What are some of the most promising regulatory solutions that you have seen?
Blanchez: Well, the most impressive at a global scale is the European General Data Protection Regulation (GDPR). In this case, you could even argue that they are a few years ahead [of technology]. The way these regulators have anticipated [the challenges] and have been able to impose rules on the world is very impressive, given all the unknowns, because none of us knows exactly where this technology is going. They’ve been able to draw a few lines in the sand that we should not cross, and those lines have inspired a lot of the thinking worldwide. Of course, there are questions about how these regulations might hinder growth and economic impact. European thinkers and thought leaders acknowledge these questions. It is not that they are not thinking about this; they have probably dealt with the priorities in the right order, and they’re inspiring the rest of the world.
Knowledge at Wharton: Coming back to some other hurdles that financial institutions face — the operational issues, for instance. If you are a major financial institution and you want to start implementing AI, what is the right place to start? Do you start with the data? With people? Where do you begin?
Blanchez: Yes, there are lots of options. Do you start with the data? Do you start with the people? Do you start with the process? With the technology? The way I think about it is that you need the nails before you look for the hammer. There’s no point in having a hammer and then going to look for nails. Be sure of the problem you want to solve, and you will find the adequate tool to solve that problem.
Knowledge at Wharton: Could you explain this approach with a use case?
Blanchez: A good example would be cybersecurity. If you are facing significant challenges in cybersecurity, your situation could be helped by these kinds of tools. You start by identifying the need, understanding what kind of technology can help you, and what kind of data you would need to improve your stance. You should move in that order, as opposed to hearing that you need to hire data scientists and gather operational data, and only then figuring out what to fix.
Knowledge at Wharton: The other big issue in deployment of AI concerns security and compliance. What are the key ideas that financial institutions should be aware of in that regard?
Blanchez: The biggest issue relates to the personal [information] and the identification of the individual. One element of this is privacy. Another element is the explainability of the model. These models are going to make decisions, and regulators — rightfully so — are adamant about understanding how these decisions are being made. Interestingly, technology could be part of the solution for the privacy aspect. A lot of effort is being made around privacy-enhancing and privacy-preserving technologies.
Knowledge at Wharton: Financial services are regulated because financial information is highly private. The other kind of information that is also highly private and sensitive relates to health care. Are there any lessons that financial services could learn from the experience of the medical field and privacy of health information?
“We are entering a new world where technology challenges some of the ways in which we work at present. But it also has a shot at making things fairer.”
Blanchez: I think it goes broader than that. It’s fascinating how the financial sector thinks that they have the most critical data. My personal opinion is that it’s only money; it’s not your life. All sectors, be it health care or retail, or even aerospace and defense, sit on explosive data. It is important to treat this with utmost accountability and responsibility.
Knowledge at Wharton: One of the things we keep hearing is that if there are limitations in the data being incomplete or biased, then the outcomes will not be good. Is there any way to prevent this?
Blanchez: You need to constantly validate your models. Always make sure that your models still make sense, and, as a good practitioner, keep maintaining them so that they do not drift over time and through the use of your data. These are some of the best practices. This is part of what academia is doing at present: defining the best practices that we as practitioners should adopt.
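[The drift-monitoring practice Blanchez describes can be sketched in code. One common convention, though not the only one and not anything specific to SWIFT, is to compare a feature’s live distribution against its training-time baseline with the Population Stability Index (PSI); the thresholds of 0.1 and 0.25 in the comments are industry rules of thumb, and the function below is a minimal illustration under those assumptions.]

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training-time) sample and a live sample.
    Rule of thumb: PSI < 0.1 is usually read as stable; > 0.25 as
    significant drift. These thresholds are conventions, not laws."""
    # Bin edges come from the baseline distribution (equal-population bins)
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Fold out-of-range live values into the edge bins
    actual = np.clip(actual, edges[0], edges[-1])
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) on empty bins
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature as seen at training time
stable = rng.normal(0.0, 1.0, 10_000)     # live data, same distribution
drifted = rng.normal(0.5, 1.3, 10_000)    # live data after the world changed

print(population_stability_index(baseline, stable))   # small, near zero
print(population_stability_index(baseline, drifted))  # noticeably larger
```

[In practice a check like this would run on a schedule for each model input, with a breach triggering investigation or retraining, which is one concrete way to “not drift through time and through the use of your data.”]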
Knowledge at Wharton: Any final thoughts that you would like to share?
Blanchez: An important aspect that we all need to think about is that, as we enter a new world with all these new AI models, how do we take care of work opportunities and the impact on the availability of work? How will work be valued and rewarded in the future? Robots will not necessarily take over entire jobs, but they will take over the portions of each of our jobs that we are least well equipped for, which are the analytical and process-driven portions. We will have much more time to allocate to creativity and, most importantly, to the EQ (emotional quotient)-related portions of work. How do we get ready for this, and how will it be compensated in the future?