As world leaders last week raised fears over runaway AI on the scale of a nuclear war or a pandemic, a more immediate and tangible frontier may well be the capital markets. The potential for AI technologies in capital markets to cause unintended effects arises when autonomous AI algorithms learn to act in concert automatically, either through a “price-trigger mechanism” that punishes deviations in trading behavior or through homogenized learning biases among algorithms, according to new research by experts at Wharton and elsewhere.

“Informed AI traders can collude and generate substantial profits by strategically manipulating low order flows, even without explicit coordination that violates antitrust regulations,” warned a research paper, titled “AI-Powered Trading, Algorithmic Collusion, and Price Efficiency,” by Wharton finance professors Winston Wei Dou and Itay Goldstein, and Yan Ji, professor of finance at the Hong Kong University of Science and Technology.

Quantitative hedge funds and leading investment firms like BlackRock and J.P. Morgan are already using AI, and that trend is gathering momentum across the financial markets. The SEC has recently given the green light to Nasdaq’s AI trading system, which utilizes reinforcement learning (RL) algorithms to make real-time adjustments. Wall Street has so far been free of a scandal over AI-powered abuses, but the threat of one is palpable, according to Dou.

Red Flags on AI in Retailing, Manufacturing

Dou pointed to the Federal Trade Commission’s recent lawsuit accusing Amazon of using a secret algorithm to manipulate prices. “AI collusion in retail markets could drive prices to super-competitive levels as the algorithms learn to achieve and maintain coordination without any form of agreement, communication, or even intention,” he said. “Retailers and manufacturers have an incentive to gain market power without improving their product qualities. That’s why antitrust regulators are very nervous about this.”

“We have seen the rise of adoption of AI trading in financial markets, so naturally we would ask similar questions — whether AI collusion will arise in the financial markets.”— Winston Dou

Dou said their paper addresses those very concerns. “We have seen the rise of adoption of AI trading in financial markets, so naturally we would ask similar questions — whether AI collusion will arise in the financial markets,” he said. “If that’s the case, the question is whether we will see important adverse real consequences. If there’s AI collusion in the financial markets, market liquidity and price informativeness may be hurt.” Put another way, he said the worry is whether the markets will effectively facilitate liquidity and if market prices will reflect “real fundamental information.”

Goldstein noted that, the Amazon case aside, there is increasing worry about AI collusion in several other markets. “Our main question is whether something like that might be happening or could happen in financial markets. Financial markets generate another type of environment with their own nuances and complications.”

The Broad Reach of Financial Markets

Goldstein said he and his co-authors focused on financial markets for potential bad outcomes of AI-powered collusion because of the broader impact they have. “The way prices are formed in financial markets ends up having a real effect,” he said. “Firms rely on financial markets to a large extent (such as to raise capital), and so we need to understand the price formation process.”

Another reason the paper’s authors picked the financial markets is a paucity of research on their specific concerns. “There’s no scientific study on how AI trading would affect market efficiency, including factors like price informativeness, market liquidity, and mispricing,” Dou said. The authors stated: “Our paper is one of the first few that study how the widespread adoption of AI-powered trading strategies would affect capital markets.”

The paper noted that the “integration of algorithmic trading and reinforcement learning, known as AI-powered trading, has significantly impacted capital markets.” Drawing from that observation, the authors created a virtual laboratory where they could study the effects of collusion between autonomous, self-interested AI trading algorithms. They developed “a model of imperfect competition among informed speculators with asymmetric information to explore the implications of AI-powered trading strategies on informed traders’ market power and price informativeness.”

A Lab to Study AI Behavior

Dou said their laboratory captures in a transparent manner the important features of the real financial market such as information asymmetry, price impact, and price efficiency. Within this laboratory, they ran trading algorithms to study their behavior and assess their influence on market liquidity and the informativeness of prices.
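A stylized sketch can convey what such a laboratory looks like in practice. The toy simulation below is an assumption-laden illustration, not the authors’ actual model: two tabular Q-learning traders repeatedly submit orders on a private signal, and a Kyle-style price-impact term (the hypothetical parameter `lam`) makes aggressive aggregate order flow move prices against the traders, so each agent’s learned policy shapes both its profits and price informativeness.

```python
import random

class QLearningTrader:
    """Minimal tabular Q-learning trader (illustrative only)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
        self.actions = actions          # possible order sizes
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}                     # (state, action) -> estimated value
        self.rng = random.Random(seed)

    def choose(self, state):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

def simulate(n_traders=2, rounds=2000, lam=0.2, seed=1):
    """Toy market: price moves by lam times total order flow (hypothetical setup)."""
    rng = random.Random(seed)
    actions = [1, 2, 3]                 # order sizes
    traders = [QLearningTrader(actions, seed=i) for i in range(n_traders)]
    state = 0                           # coarse bucket of last round's order flow
    for _ in range(rounds):
        v = rng.choice([-1, 1])         # shared private signal: fundamental direction
        sizes = [t.choose(state) for t in traders]
        orders = [v * s for s in sizes]
        price_move = lam * sum(orders)  # price impact of aggregate flow
        next_state = min(2, abs(sum(sizes)) // 2)
        for t, s, x in zip(traders, sizes, orders):
            profit = x * (v - price_move)   # informed profit net of price impact
            t.update(state, s, profit, next_state)
        state = next_state
    return traders
```

Because price impact is shared, all traders ordering aggressively at once erodes everyone’s profits, while collectively restrained order flow preserves them; this is the kind of incentive structure under which tacitly collusive policies can emerge during learning.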

According to the paper, algorithmic collusion arises from two mechanisms: collusion through homogenized learning biases and collusion through punishment threat, as in a price-trigger strategy. Biased learning, also known as “artificial stupidity,” arises from insufficient learning about play at off-the-equilibrium-path information sets. Such learning biases are homogenized among AI traders because of the shared foundational models upon which they are developed. Collusion through the threat of punishment deters cartel members from breaking away, or “deviating from tacitly agreed-upon behavior,” Dou explained.

“What we were looking for is implicit collusion that occurs between machines. They come to behave in a way that is difficult to detect.”— Itay Goldstein

In product markets such as the OPEC oil cartel, members can monitor deviations from agreed-upon behavior by tracking prices and volumes, and then hand out punishments to cartel-breakers such as blocking them from profitable deals. But high-frequency trading in the financial markets makes it difficult to monitor cartel-breakers. That is where AI-powered trading algorithms can learn to automatically trigger penalties for deviant behavior that market prices may reveal, Dou said. “Such collusion will incentivize all AI algorithms to stay within well-behaved trading strategies. No one will trade too aggressively relative to others.”
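The logic of such a price-trigger rule can be sketched in a few lines. The class below is a hypothetical illustration with made-up parameters, not the strategy learned in the paper: the trader submits a small “cooperative” order while observed price moves stay inside a tolerance band, and a move outside the band, which would reveal that someone traded aggressively, triggers a fixed punishment phase of aggressive trading before cooperation resumes.

```python
class PriceTriggerStrategy:
    """Illustrative price-trigger rule (hypothetical parameters)."""

    def __init__(self, cooperative=1, aggressive=3, band=1.0, punish_len=5):
        self.cooperative = cooperative  # restrained order size while colluding
        self.aggressive = aggressive    # order size during punishment
        self.band = band                # tolerated absolute price move
        self.punish_len = punish_len    # punishment phase length in rounds
        self.punish_left = 0

    def next_order(self, last_price_move):
        # A price move outside the band signals a deviation: start punishing.
        if abs(last_price_move) > self.band:
            self.punish_left = self.punish_len
        if self.punish_left > 0:
            self.punish_left -= 1
            return self.aggressive      # punish deviators with aggressive trading
        return self.cooperative         # otherwise keep order flow restrained
```

Because every algorithm anticipates this automatic retaliation, none finds it profitable to trade aggressively in the first place, which is precisely the self-enforcing restraint Dou describes.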

The upshot is that price informativeness suffers as a result of such manipulation. “In a market with prevalent AI-powered trading, price efficiency and informativeness can be compromised due to both artificial intelligence and stupidity,” the paper noted.

Understanding the Psychology of Machines

In the lab they created, the paper’s authors became detectives looking for ways in which AI-powered trading algorithms might learn to collude without being detected. “What we were looking for is implicit collusion that occurs between machines,” Goldstein said. “They come to behave in a way that is difficult to detect. And that’s what we tried to figure out through this paper.” Added Dou: “The collusion automatically happens, even when each machine is 100% autonomous without any communication or intention of coordination.”

The lab studies showed the conditions under which collusion thrives. “Collusion through punishment threat (artificial intelligence) only exists when price efficiency and information asymmetry are not very high. However, collusion through homogenized learning biases (artificial stupidity) exists even when efficient prices prevail or when information asymmetry is severe,” the paper stated.

To study how collusion among AI-powered trading algorithms can occur, the authors had to understand how machines think, so to speak. “Comprehending the dynamics of capital markets with the prevalence of AI-powered trading algorithms requires insights into algorithmic behavior akin to the ‘psychology’ of machines,” the paper stated.

What the Study Found

The study’s main findings included:

  • Informed AI speculators can collude and achieve supra-competitive profits by strategically manipulating excessively low order flows, even in the absence of agreement or communication that would constitute an antitrust infringement.
  • In scenarios where so-called preferred-habitat investors “play a substantial role in price formation, resulting in prices that are not highly efficient, tacit collusion among informed AI-powered speculators can be sustained through the use of price-trigger strategies.” (Preferred-habitat investors are typically long-term and insensitive to new short-run information.)
  • The effectiveness of AI collusion depends on the level of information asymmetry in the market: To maintain collusion via a price-trigger punishment threat mechanism (artificial intelligence), the level of information asymmetry must not be too extreme, and there should not be an excessive number of informed speculators, conditions that mirror real-world scenarios.
  • In the scenario with high price efficiency or high information asymmetry, tacit collusion between AI-powered speculators can still be achieved through homogenized learning biases, reflecting artificial stupidity.

Regulators Keep Vigil

Regulators are on high alert. Securities and Exchange Commission (SEC) Chair Gary Gensler recently cautioned against “the possibility of AI destabilizing the global financial market if big tech-based trading companies monopolize AI development and applications within the financial sector,” the paper noted. “[Regulators] have repeatedly highlighted the potential for AI to inadvertently amplify biases that could lurk in their designers, further jeopardizing competition and market efficiency.”

The findings of the paper serve as an early warning signal to both investors and regulators who want to prevent price distortions — and the broader implications of such distortions on the capital markets. But more research is required to draw insights that weigh both the good and bad outcomes of AI power, Goldstein said. “If you want to think about whether overall, AI technologies are helping or hurting the discovery of information through prices, broader investigation is needed for that. Our study brings to light one potential adverse effect.”