Artificial intelligence (AI) technologies hold great promise for the financial services industry, but they also bring risks that must be addressed with the right governance approaches, according to a white paper published by Wharton AI for Business and written by a group of academics and executives from the financial services and technology industries.

Wharton is the academic partner of the group, which calls itself Artificial Intelligence/Machine Learning Risk & Security, or AIRS. Based in New York City, the AIRS working group was formed in 2019 and includes about 40 academics and industry practitioners.

The white paper details the opportunities and challenges financial firms face in implementing AI strategies and how they could identify, categorize, and mitigate potential risks by designing appropriate governance frameworks. However, AIRS stopped short of making specific recommendations, saying the paper is meant for discussion purposes. “It is critical that each institution assess its own AI uses, risk profile and risk tolerance, and design governance frameworks that fit their unique circumstances,” the authors write.

“Professionals from across the industry and academia are bullish on the potential benefits of AI when its governance and risks are managed responsibly,” said Yogesh Mudgal, AIRS founder and lead author of the white paper. The standardization of AI risk categories proposed in the paper and an AI governance framework “would go a long way to enable responsible adoption of AI in the industry,” he added.

Potential Gains from AI

Financial institutions are increasingly adopting AI “as technological barriers have fallen and its benefits and potential risks have become clearer,” the paper noted. It cited a report by the Financial Stability Board, an international body that monitors and makes recommendations about the global financial system, which highlighted four areas where AI could impact banking.

The first covers customer-facing uses that could expand access to credit and other financial services by using machine learning algorithms to assess credit quality or to price insurance policies, and to advance financial inclusion. Tools such as AI chatbots “provide help and even financial advice to consumers, saving them time they might otherwise waste while waiting to speak with a live operator,” the paper noted.

“It starts with education of users. We should all be aware of when algorithms are making decisions for us and about us.” –Kartik Hosanagar

The second area for using AI is in strengthening back-office operations, including developing advanced models for capital optimization, model risk management, stress testing, and market impact analysis.

The third area relates to trading and investment strategies. The fourth covers AI advancements in compliance and risk mitigation by banks. AI solutions are already being used for fraud detection, capital optimization, and portfolio management, the paper stated.

Identifying and Containing Risks

For AI to improve “business and societal outcomes,” its risks must be “managed responsibly,” the authors write in their paper. AIRS research is focused on self-governance of AI risks for the financial services industry, and not AI regulation as such, said Kartik Hosanagar, Wharton professor of operations, information and decisions, and a co-author of the paper.

In exploring the potential risks of AI, the paper provided “a standardized practical categorization” of risks related to data, AI and machine learning attacks, testing, trust, and compliance. Robust governance frameworks must focus on definitions, inventory, policies and standards, and controls, the authors noted. Those governance approaches must also address the potential for AI to create privacy issues and discriminatory or unfair outcomes “if not implemented with appropriate care.”

In designing their AI governance mechanisms, financial institutions must begin by identifying the settings where AI cannot replace humans. “Unlike humans, AI systems lack the judgment and context for many of the environments in which they are deployed,” the paper stated. “In most cases, it is not possible to train the AI system on all possible scenarios and data.” Hurdles such as the “lack of context, judgment, and overall learning limitations” would inform approaches to risk mitigation, the authors added.

Poor data quality and the potential for machine learning/AI attacks are other risks financial institutions must factor in. The paper delved further into how those attacks could play out. In data privacy attacks, an attacker could infer sensitive information from the data set used to train an AI system. The authors identified two major types of attacks on data privacy — “membership inference” and “model inversion” attacks. In a membership inference attack, an attacker could determine whether a particular record, or set of records, was part of the data set used to train the AI system. In a model inversion attack, an attacker could extract the training data directly from the model. Other attacks include “data poisoning,” which could be used to increase the error rate of AI/machine learning systems and distort learning processes and outcomes.
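
To make the membership inference idea concrete, the sketch below shows how an overfit model can leak whether a record was in its training set: members tend to receive higher confidence on their true labels than non-members. This is an illustrative example, not taken from the AIRS paper; the synthetic data, scikit-learn model, and confidence threshold are all assumptions made for demonstration.

```python
# Illustrative membership inference sketch (synthetic data, hypothetical threshold).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a sensitive training data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# A deliberately overfit model memorizes training records, which leaks membership signal.
model = RandomForestClassifier(n_estimators=200, min_samples_leaf=1, random_state=0)
model.fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """Probability the model assigns to each record's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

member_conf = true_label_confidence(model, X_train, y_train)
nonmember_conf = true_label_confidence(model, X_out, y_out)

# An attacker flags records scored above a confidence threshold as likely training members.
threshold = 0.9  # illustrative; real attacks calibrate this, e.g., with shadow models
print("members flagged as in training set:   ", np.mean(member_conf > threshold))
print("non-members flagged as in training set:", np.mean(nonmember_conf > threshold))
```

The gap between the two rates is the leaked membership signal; the wider the gap, the more the model reveals about who appears in its training data.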

Making Sense of AI Systems

Interpretability, or presenting the AI system’s results in formats that humans can understand, and discrimination, which could result in unfairly biased outcomes, are also major risks in using AI/machine learning systems, the paper stated. Those risks could prove costly: “The use of an AI system which may cause potentially unfair biased outcomes may lead to regulatory non-compliance issues, potential lawsuits, and reputational risk.”

The complexity and opacity of algorithms could themselves contribute to discriminatory outcomes. “Some machine learning algorithms create variable interactions and non-linear relationships that are too complex for humans to identify and review,” the paper noted.
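
One way institutions probe such opaque models is with model-agnostic interpretability tools. The sketch below is an assumed setup rather than anything prescribed in the paper: it uses scikit-learn’s permutation importance on synthetic data to measure how much held-out accuracy drops when each input feature is shuffled, giving reviewers a rough, human-readable ranking of what the model relies on.

```python
# Illustrative permutation-importance sketch for probing an opaque model (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A boosted ensemble captures non-linear interactions that are hard to read off directly.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and record how much held-out accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for j in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {j}: mean accuracy drop {result.importances_mean[j]:.3f}")
```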

Other AI risks concern how accurately humans can interpret and explain AI processes and outcomes. Testing mechanisms, too, have shortcomings, as some AI/machine learning systems are “inherently dynamic and apt to change over time,” the paper’s authors pointed out. Furthermore, testing for “all scenarios, permutations, and combinations” of data may not be possible, leading to gaps in coverage.
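
Because such systems drift, one common control — an assumption here, not a recommendation from the paper — is to monitor live inputs against the distribution seen during validation and flag features that have shifted enough to warrant retesting. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy on synthetic data.

```python
# Illustrative input-drift check: compare live feature distributions to a reference window.
import numpy as np
from scipy.stats import ks_2samp

def drift_alerts(reference, live, p_threshold=0.01):
    """Flag features whose live distribution differs from the reference window
    according to a two-sample Kolmogorov-Smirnov test."""
    alerts = []
    for j in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, j], live[:, j])
        if p_value < p_threshold:
            alerts.append((j, stat, p_value))
    return alerts

rng = np.random.default_rng(0)
reference = rng.normal(size=(5000, 5))   # feature values seen during validation
live = rng.normal(size=(5000, 5))        # a later production window
live[:, 2] += 0.8                        # simulate drift in one feature

for feature, stat, p in drift_alerts(reference, live):
    print(f"feature {feature} drifted: KS statistic {stat:.3f}, p-value {p:.2e}")
```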

“We need a national algorithmic safety board that would operate much like the Federal Reserve….” –Kartik Hosanagar

Unfamiliarity with AI technology could also give rise to trust issues with AI systems. “There is a perception, for example, that AI systems are a ‘black box’ and therefore cannot be explained,” the authors wrote. “It is difficult to thoroughly assess systems that cannot easily be understood.” In a survey AIRS conducted among its members, 40% of respondents had “an agreed definition of AI/ML,” while only a tenth had a separate AI/ML policy in place at their organizations.

The authors flagged the potential for discrimination as a particularly difficult risk to control. Interestingly, some recent algorithms helped “minimize class-control disparities while maintaining the system’s predictive quality,” they noted. “Mitigation algorithms find the ‘optimal’ system for a given level of quality and discrimination measure in order to minimize these disparities.” 
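
The paper does not name the specific mitigation algorithms it has in mind, but the trade-off it describes can be illustrated with a simple sketch: measure the gap in approval rates between two groups under a single score threshold, then adjust one group’s threshold to shrink the gap. Everything below — the scores, the group labels, the simulated bias, and the thresholds — is hypothetical.

```python
# Illustrative disparity measurement and crude threshold-based mitigation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=10_000)        # model scores, e.g., predicted creditworthiness
group = rng.integers(0, 2, size=10_000)  # hypothetical protected-class indicator
scores[group == 1] -= 0.05               # simulate a systematically depressed score

def approval_rates(scores, group, thresholds):
    """Approval rate per group when group g is approved at scores >= thresholds[g]."""
    return {g: float(np.mean(scores[group == g] >= thresholds[g])) for g in (0, 1)}

# One shared threshold: measure the demographic-parity gap.
single = approval_rates(scores, group, {0: 0.5, 1: 0.5})
print("gap with a single threshold:", abs(single[0] - single[1]))

# A crude mitigation: lower group 1's threshold until approval rates roughly match.
adjusted = approval_rates(scores, group, {0: 0.5, 1: 0.45})
print("gap with adjusted thresholds:", abs(adjusted[0] - adjusted[1]))
```

Real mitigation algorithms search over such adjustments, or retrain the model, to find the best achievable balance between predictive quality and the chosen discrimination measure — the optimization the paper alludes to.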

A Human-centric Approach

To be sure, AI cannot replace humans in all settings, especially when it comes to ensuring a fair approach. “Fair AI may require a human-centric approach,” the paper noted. “It is unlikely that an automated process could fully replace the generalized knowledge and experience of a well-trained and diverse group reviewing AI systems for potential discrimination bias. Thus, the first line of defense against discriminatory AI typically could include some degree of manual review.”

“It starts with education of users,” said Hosanagar. “We should all be aware of when algorithms are making decisions for us and about us. We should understand how this might affect the decisions being made. Beyond that, companies should incorporate some key principles when designing and deploying people-facing AI.”

Hosanagar listed those principles in a “bill of rights” he proposed in his book, A Human’s Guide to Machine Intelligence. They include:

  • A right to a description of the data used to train the algorithms and details as to how that data was collected,
  • A right to an explanation regarding the procedures used by the algorithms expressed in terms simple enough for the average person to easily understand and interpret, and
  • Some level of control over the way algorithms work that should always include a feedback loop between the user and the algorithm.

Those principles would make it much easier for individuals to flag problematic algorithmic decisions and would give government ways to act, Hosanagar said. “We need a national algorithmic safety board that would operate much like the Federal Reserve, staffed by experts and charged with monitoring and controlling the use of algorithms by corporations and other large organizations, including the government itself.”

Evolving Regulatory Landscape

Hosanagar pointed to some of the important mile markers on the regulatory landscape for AI:

The Algorithmic Accountability Act, proposed by Democratic lawmakers in Spring 2019, would, if passed, require that large companies formally evaluate their “high-risk automated decision systems” for accuracy and fairness.

The European Union’s General Data Protection Regulation (GDPR), while mostly focused on regulating how companies process personal data, also covers some aspects of AI, such as a consumer’s right to an explanation when companies use algorithms to make automated decisions.

While the scope of the right to explanation is relatively narrow, the Information Commissioner’s Office (ICO) in the U.K. has recently invited comments for a proposed AI auditing framework that is much broader in scope, said Hosanagar. The framework is meant to support ICO’s compliance assessments of companies that use AI for automated decisions, he added.

That framework identifies eight AI-specific risk areas, among them fairness and transparency, accuracy, and security. It also identifies governance and accountability practices, including leadership engagement, reporting structures, and employee training.

Building accurate AI models, creating centers of AI excellence, and providing oversight and monitoring through audits are critical pieces in guarding against negative outcomes, the paper stated. Drawing from the survey’s findings, the AIRS paper concluded that the financial services industry is in the early stages of adopting AI and would benefit from a common set of definitions and more collaboration in developing risk categorizations and taxonomies.

Learn more: Visit Wharton AI for Business.