With more access to data and growing computing power, artificial intelligence (AI) is becoming increasingly powerful. But for it to be effective and meaningful, we must embrace people-first artificial intelligence strategies, according to Soumitra Dutta, professor of operations, technology, and information management at the Cornell SC Johnson College of Business. “There has to be a human agency-first kind of principle that lets people feel empowered about how to make decisions and how to use AI systems to support their decision-making,” notes Dutta. Knowledge at Wharton interviewed him at a recent conference on artificial intelligence and machine learning in the financial industry, organized in New York City by the SWIFT Institute in collaboration with Cornell’s SC Johnson College of Business.
In this conversation, Dutta discusses some myths around AI, what it means to have a people-first artificial intelligence strategy, why it is important, and how we can overcome the challenges in realizing this vision.
An edited transcript of the conversation follows.
Knowledge at Wharton: What are some of the biggest myths about AI, especially as they relate to financial services?
Soumitra Dutta: AI, as we all know, is not new per se. It has been around for as long as modern computing, and it has gone through ups and downs. What we are seeing right now is an increased sense of excitement or hype. Some people would argue it’s over-hyped. I think the key issue is distinguishing between hope and fear. Today, what you read about AI is largely focused on fear — fear of job losses, fear of what it means in terms of privacy, fear of what it means for the way humans exist in society. The big challenge for us is to navigate the fear space and move into the hope space. By “hope,” I mean that AI, like any other technology, has negative side effects, but it also presents enormous positive benefits. Our collective challenge is to move into the positive space and look at how AI can help empower people, help them become better individuals, better human beings, and how that can lead to a better society.
Knowledge at Wharton: How do you get to the “hope” space in a way that is based on reality and away from the myths and hype?
Dutta: We need to have what I term a “people-first” AI strategy. We have to use technology not because technology exists, but because it helps us become better individuals. When organizations deploy AI inside their work processes or systems, we have to explicitly focus on putting people first.
This could mean a number of things. There will be some instances of jobs getting automated, so we have to make sure that we provide adequate support for re-skilling, for helping people transition across jobs, and for making sure they don’t lose their livelihoods. That’s a very important basic condition. More importantly, AI provides tools for predicting outcomes of various kinds, but the actual implementation is a combination of the outcome prediction plus judgment about that prediction. The judgment component should largely be a human decision. We have to design processes and organizations such that this combination of people and AI lets people be in charge as much as possible.
There has to be a human agency-first kind of principle that lets people feel empowered about how to make decisions and how to use AI systems to make better decisions. They must not feel that their abilities are being questioned or undercut. It’s the combination of putting people and technology together effectively that will lead to good AI use in organizations.
“The key issue is distinguishing between hope and fear…. The big challenge for us is to navigate the fear space and move into the hope space.”
Knowledge at Wharton: That’s an ambitious vision — of being people-centric in the way you think about AI. What are some of the challenges involved in realizing this vision?
Dutta: Some are technological challenges, and some are organizational challenges. In terms of technology, AI systems fall broadly into two categories. First, there are the traditional rule-based systems, which are built on if/then kinds of rules. These are much easier to integrate, partly because they can be explained logically, in terms that humans can understand.
The second category, which is much more exciting — and which has had some of the most impressive results — involves the application of deep learning, neural networks, and other kinds of related technologies. These technologies, unfortunately, are still largely black boxes. It’s hard to explain the complex mathematics inside these boxes and why they come up with some outcomes and not others. Given the lack of transparency, it sometimes becomes hard for human beings to accept the outcome of the machines. Introducing more transparency into the prediction outcomes of AI systems is the technological challenge that many scientists around the world are trying to address.
The organizational challenges are equally complex, if not bigger. How do you design work systems and work processes that leverage the best of people and machines? This requires the ability not just to blindly follow the path of automation because it seems the most cost-effective way to handle things, but also to have the patience to understand how to redesign jobs. Jobs have to change as you implement AI systems inside organizations. That requires the ability to support people as they make transitions and to invest in their development and re-skilling.
AI systems, like many technological systems, provide additional support to people. This support has to make people feel more empowered. If systems don’t make people feel better about what they do, those systems will fail to gain acceptance in the organization. So there are a lot of human issues and managerial issues involved in making sure that companies present a people-centric or a people-first approach to AI.
Knowledge at Wharton: If we look at the broad range of financial services and banking, in which sectors do you see the most disruption — or maybe the most innovation — through AI?
Dutta: The whole financial sector is ripe for the application of AI, because finance is extremely data-intensive. It has been at the forefront of technology applications. I would argue that AI could transform every single decision-making process in the finance sector because you have volumes and volumes of data. Traditionally, financial organizations operated primarily with financial data. But now they are able to integrate social behavior as well as social media data. They can combine the human social side, the behavioral side, with the financial side. The complexity of data has increased tremendously, so how do you handle that kind of data complexity? AI has the best set of tools to handle this.
What I see at present is that financial organizations are experimenting, trying to understand how to apply AI creatively and productively. One should not forget that this phase of applying AI in organizations is relatively new. Even in the case of leaders like Amazon and Google, it was only about seven or eight years ago that these organizations decided to focus in an important way on AI. There was a process of experimentation, of building strength in R&D, and of exploring what could make sense. That process of building strength and exploring ideas with AI is only now starting in the financial sector. So we have not yet seen the full impact of AI in finance. We’re just starting out on this long path.
“It’s the combination of putting people and technology together effectively that will lead to good AI use in organizations.”
Knowledge at Wharton: There’s an interesting debate going on about how U.S. and Western financial institutions are using AI relative to Chinese companies. In this competition between the U.S. and China, who do you think will take the lead in AI innovation?
Dutta: It’s important to first understand what makes AI systems powerful. The general consensus is that AI technologies per se haven’t seen any massive breakthroughs. What has happened is much more data is available now for training AI systems. We also have much more computational power for running through different algorithms. There is also a lot more effort being spent on engineering.
Typically, when you build an AI system, it’s not a clean application where you write the algorithm and it works. Instead, you have to build 20 different models, try out 50 different data sets, and look at different heuristics to see what works. A combination of many approaches and a lot of testing goes into obtaining an effective end outcome.
If you look at the elements that make for a successful AI system, what you see is that on data, China has a natural advantage because of its large population and the number of people doing online transactions. Chinese companies, at least the digital leaders, are sitting on enormous volumes of data that dwarf some of their American peers. This gives them an advantage. When it comes to computational resources, the U.S. has an edge in the design of advanced microprocessors and custom AI chips, though China has amassed a world-leading concentration of computing power. Engineering requires a lot of manpower to experiment, to build out systems and to try different variations. China has cheaper labor and also more manpower in terms of sheer numbers. So when we look at the three components, it’s quite likely that China is going to have an edge.
Data privacy is another big issue. In the U.S., there is some clarity on who owns the data and how it can or cannot be used. In China, that’s unclear as of now. Is it the company that owns the data? What kind of access does the government have to it? Who can use the data? So Chinese AI companies might have some challenges when it comes to their international growth.
Knowledge at Wharton: Is the use of data an area where international regulations can play a role?
“Given the lack of transparency, it sometimes becomes hard for human beings to accept the outcome of the machines.”
Dutta: Yes and no. Things are moving so fast that regulators, in general, are behind. The European Union is probably the best-known example of a region trying to regulate data use. They’ve done some good work with the GDPR (General Data Protection Regulation). Some states, like California, are putting data privacy regulations in place. But I think it’s important to have a coordinated approach.
The ultimate goal should be that the customer owns the data and the customer should be able to decide who uses the data and under what conditions. But we are far away from that. The reality is that the large companies in the world — it doesn’t matter which part of the world they come from — have enormous power. Most people don’t read the fine print when they sign user agreements. The balance is tilted in favor of large private players. Regulation is behind, and customers don’t have the tools to manage the data themselves. We have a turbulent period ahead of us.
The big players — who have enormous data stores — will be reluctant to give up that data because a lot of their competitive advantage is based on it. Unless there’s strong regulation or a vigorous consumer backlash, it’s not going to happen. I don’t see a strong consumer backlash happening because consumers are seduced by free applications and the convenience factor. Who’s going to give up Google search? Who’s going to give up other free services? People are increasingly accepting the fact that they’re losing control over their data in return for free services. So, regulation is probably the best-case scenario, but again, regulators have been relatively behind in most countries.
Knowledge at Wharton: One major question that keeps coming up about AI is what will happen to jobs — especially at the lower levels in organizations. How can the people question be dealt with in this regard?
Dutta: This is probably one of the biggest questions facing policy and society. What will be the impact of technology on jobs? If you look at the last 100 years, with the exception of the Great Depression, the U.S. growth rate has been relatively constant, and unemployment has stayed within a relatively narrow range. Many people argue that technology has come and gone, but U.S. industry has somehow adjusted. Yes, there have been shifts; people have lost jobs, but they have also gained jobs. On the whole, employment has grown.
The big question in front of us today, to which we don’t have a clear answer, is what will happen with AI now? AI is different in the sense that it automates, or potentially automates, not just lower-end jobs but also higher-end jobs. In medical domains, for example, many AI systems perform at a higher level than the best doctors.
Clearly, machines will do some jobs entirely or very substantially. We have to decide how to handle the people who are displaced. It is an issue of organizational leadership and policies, and of national initiatives and regulation. It is an issue of how you support the growth of new industries. If AI is allowing the growth of new industries, is the economy flexible and entrepreneurial enough to support that growth?
“If you have two partners who need to coexist, and one has some limits while the other does not have any limits, then how do you handle that merger?”
There are a lot of micro and macro issues in terms of supporting change, allowing new sectors to flourish, enabling people to learn new skills, and so on. That’s what makes the whole thing so complicated. It’s a challenge that we have to face collectively, because if you don’t get it right, there will be massive dislocations in society. The issues are solvable, but they have to be solved with collective action and determined leadership.
Knowledge at Wharton: If you gaze into your crystal ball, what do you see coming down the road?
Dutta: The next five to 10 years are going to be very important in determining how AI is used in society. The impact of AI will play out over several decades. That’s one reason why universities like Stanford are publicly committed to studying the impact of AI for the next 100 years. We have to start understanding what the possible implications are. In many cases, we don’t know what we don’t know about AI. What will the impact be? How will people react when systems become more intelligent, when they are capable of more autonomous learning? How will their behavior change? And especially if these machines are black boxes, how will we understand the interactions of the machines with human beings in society?
I don’t want to make it sound like science fiction, but it’s an important question ahead of us. Ultimately, we have to have people and machines coexisting in an effective manner. The challenge is that the machines’ capability is increasing — in some areas exceeding human beings — and potentially with no upper limits. That, I think, is the interesting challenge. If you have two partners who need to coexist, and one has some limits while the other does not, then how do you handle that merger?