The versatility of AI applications and their rapidly expanding user base have turned the spotlight on regulators and the need for guardrails around the technology. Wharton experts delved into that and related topics in a recent session of a new Wharton series called “Policies That Work.”
Stefano Puntoni, Wharton marketing professor and co-director of AI at Wharton, moderated the panel on “AI, Technology, and the Role of Regulation” with Kartik Hosanagar, Wharton professor of operations, information and decisions, and Kevin Werbach, Wharton professor of legal studies and business ethics and department chair.
Watch the full panel or read some key takeaways below:
Major Concerns for Policymakers
Werbach put the major concerns with AI into three buckets: safety, governance, and abuses. On safety, advanced AI models, especially foundation models for generative AI, could cause catastrophic harm. While the potential for them to become “super intelligent and a threat to humanity” is remote, he said, “there are certainly possibilities we could see mass casualties and attacks on critical infrastructure — really severe harms.”
Accuracy, bias, privacy, intellectual property, and transparency are other major concerns. The prevalence of bias is worrisome, for instance, when AI is used for sensitive decisions like hiring or lending, and that decision-making poses transparency challenges. Governance would also have to address issues such as data privacy and concentration of market power. Abuses could take the form of deliberate misuse of AI systems, such as misinformation, deepfakes, and cyberattacks.
Deepfakes and Biometrics: Two Urgent Issues
Hosanagar picked deepfakes as the biggest issue that warrants regulatory attention. “Deepfakes can create civil unrest, and democracies should be worried about it,” he said. Cybercrime, especially financial fraud, and pornography are other scenarios where deepfakes cause harm.
According to Werbach, “The biggest gaps [are in areas where] we don’t have any structure that’s addressing serious problems.” Generative AI applications such as ChatGPT, Character.ai, and other general-purpose large language models (LLMs) “can be incredibly valuable and useful, but also potentially dangerous.” He pointed to cases in which users of such chatbots have died by suicide. Even before detailed regulation arrives, he recommended putting “appropriate guardrails” in place.
Another area that lacks a regulatory structure is biometrics, especially facial recognition, which has been used for flight safety or to identify shoplifters. Evidence has shown that these biometric systems “can be highly inaccurate and highly biased,” such as against darker-skinned people, if firms don’t take care to mitigate those problems, Werbach said. He pointed to a recent case involving drug store chain Rite Aid, whose facial recognition tool to prevent shoplifting came under the scrutiny of the Federal Trade Commission.
“Deepfakes can create civil unrest, and democracies should be worried about it.”— Kartik Hosanagar
Finding the Balance in Regulation
AI regulation must strike the right balance, addressing “legitimate concerns about equity, welfare, and safety, [without] stifling innovation,” Puntoni said. “The risk is if you over-regulate, then you stifle innovation,” Hosanagar said, noting that “the center of innovation” shifts away from states or countries that regulate heavily.
Hosanagar identified two approaches for AI regulation. One is to make it issue-specific, rather than broad-based in a way that brings “compliance headaches … to hundreds of other use cases that are harmless.” For instance, such regulation could focus sharply on, say, deepfakes and financial fraud, instead of broader fields such as LLMs or image models in general, he said.
The second approach is to create regulatory sandboxes for innovation. These sandboxes allow companies to try out their innovations and let both companies and regulators track the relevant data, Hosanagar said. He cited the Monetary Authority of Singapore, whose regulatory sandbox covers financial use cases of AI, and Dubai, whose sandbox covers emerging technologies more broadly.
AI Regulation in the U.S.
“No country in the world is really seriously trying to regulate all AI,” Werbach said. “They’re focused on use cases, or they’re focused on risk.”
The U.S. does not have a specific federal law focused on AI regulation, but existing laws cover many potential concerns, such as unfair and deceptive practices, or discrimination. Some recent initiatives do address AI governance issues. The Biden administration in October 2023 issued an executive order on “the safe, secure, and trustworthy development and use” of AI, which set out detailed requirements for government agencies to better understand the issues with the technology, and put in place procedures for investigations.
The executive order is not binding on the private sector, but it will have an indirect effect, Werbach said. “The U.S. government is the biggest purchaser of everything, including AI technologies.”
Potential Changes Under Trump
The U.S. is poised to shift gears on AI regulation from a focus on risk management under the Biden administration to efforts to “unshackle AI” in the coming Trump administration, Werbach continued. Trump’s donors and campaign participants, including Elon Musk, “think the potential is there for this technology to be many orders of magnitude better, more powerful, and more valuable for the economy and for America. And they think regulation is standing in the way.”
That said, the future course of AI regulation in the U.S. is less clear, as the Trump campaign has targeted big tech but has also spoken of unshackling companies from regulation.
“No country in the world is really seriously trying to regulate all AI. They’re focused on use cases, or they’re focused on risk.”— Kevin Werbach
AI developers are not, however, waiting for clarity. “Companies are voluntarily doing AI governance, and engaging in these standards processes, because they want the AI to be trusted,” Werbach said. “They don’t want to have a situation where they build something and it’s discriminatory, or one of their users commits suicide.”
While it is premature to speculate on the shape of federal AI laws under the Trump administration, state-level AI regulation is moving forward. Some 700 AI bills are pending in U.S. states, and dozens have been adopted, including in California and Colorado, Werbach said. One high-profile bill in California (SB 1047) was recently vetoed by Governor Gavin Newsom.
Regulation in Other Countries
Singapore, the U.K., the European Union, and China have taken concrete steps to regulate AI. Singapore, for instance, released a Model AI Governance Framework earlier this year, which Hosanagar characterized as business friendly.
The EU earlier this year passed its AI Act, which Werbach described as “comprehensive, top-down, detailed, and prescriptive.” It has a risk-based approach, where it identifies “high risk” uses of AI such as in hiring, defense, criminal justice, finance, and education. The EU is currently in “a process of standardization and engagement” with its AI Act.
China has already adopted at least three enforceable AI laws, focused on recommender systems, deepfakes, and generative AI systems. China also requires approval and licensing to deploy chatbots with public-facing LLMs, preceded by a standardized and structured test.
Confronting Bias
In framing regulations to prevent bias in AI systems, policymakers must recognize that the issues they confront may be rooted in technical, ethical, legal, or governance shortcomings. Hosanagar said bias in AI models often originates in the training data and can be mitigated with bias detection tools, data resampling, or by filling data gaps. For instance, data tends to be sparse for minority groups; technical remedies include resampling that data or using fairness-aware machine learning approaches, he added.
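To make the resampling idea concrete, here is a minimal sketch in Python. The column names (`group`, `label`) and the toy data are invented for illustration and are not drawn from the panel; the sketch simply duplicates records from underrepresented groups until every group appears as often as the largest one.

```python
import random
from collections import Counter

def oversample_minority(rows, group_key="group"):
    """Duplicate rows from underrepresented groups (sampling with
    replacement) until every group appears as often as the largest one.
    `rows` is a list of dicts; `group_key` names the sensitive attribute.
    Note: this only balances group counts in the training data; it does
    not by itself guarantee a fair downstream model."""
    counts = Counter(r[group_key] for r in rows)
    target = max(counts.values())
    balanced = list(rows)
    for group, n in counts.items():
        members = [r for r in rows if r[group_key] == group]
        balanced.extend(random.choices(members, k=target - n))
    return balanced

# Hypothetical toy data: group "B" is underrepresented.
data = [{"group": "A", "label": 1}] * 80 + [{"group": "B", "label": 0}] * 20
print(Counter(r["group"] for r in oversample_minority(data)))  # A: 80, B: 80
```

Fairness-aware machine learning approaches go a step further by building fairness constraints into model training itself, rather than only rebalancing the data beforehand.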
Shortcomings in algorithms used for hiring or credit scoring could take on a legal dimension if they implicate anti-discrimination laws. Bias is also an ethical issue, raising fundamental questions such as how to define fairness, and who gets to define it. “The difficulty is when you deploy algorithms into social systems, and you have bias emerging as an interplay between algorithmic and human decision-making,” said Puntoni.
Bias can be a tricky issue to regulate. “Human decision-makers are incredibly biased, and they can hide their biases behind explanations that seem to make sense,” said Hosanagar. “It’s harder to hide AI biases. [But] the problem with AI biases is that they scale. A biased judge in a courtroom affects a few thousand lives. If you use an AI system to guide a judge’s sentencing decisions, and it is biased, it affects millions of lives.”
“The difficulty is when you deploy algorithms into social systems, and you have bias emerging as an interplay between algorithmic and human decision-making.”— Stefano Puntoni
Bias cannot be regulated away with just blanket rules. Instead, Werbach recommended toolkits for technical, ethical, and governance issues along with the evolution of standards and best practices.
Accuracy is also “a huge challenge,” especially when generative AI systems produce hallucinations or confabulations, Werbach said. One way to address accuracy issues is through governance frameworks that incorporate explainability and human-in-the-loop requirements. For instance, if an AI system rejects a loan application, it could come with an explanation; in settings where AI cannot be fully autonomous, human review could catch accuracy problems. Werbach advised against laying down regulatory thresholds arbitrarily, noting that generative AI systems “don’t understand in any kind of human way how they relate to the real world.”
But Werbach made an exception for AI safety: “If we are worried about catastrophic harm, then we need to at least have some arbitrary threshold. It will be highly imperfect, but we need to really ensure that we mitigate these very, very severe risks.” The EU also prohibits what it considers unacceptable uses of AI, such as emotion detection in schools and the workplace, and large-scale, real-time biometric scanning, he noted, although the U.S. may not draw the same lines.
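To illustrate the explainability and human-in-the-loop idea in the loan example above, here is a minimal, hypothetical sketch. The `score_application` model, its feature names, and the decision thresholds are all assumptions made for illustration, not part of any framework discussed on the panel; the point is only the pattern: clear-cut scores are decided automatically, rejections carry the factors that drove them, and borderline cases go to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # "approved", "rejected", or "needs_human_review"
    explanation: list  # factors shown to the applicant, if rejected

def score_application(applicant: dict) -> tuple:
    """Hypothetical scoring model: returns an approval score in [0, 1]
    plus per-feature contributions used to build the explanation."""
    contributions = {
        "debt_to_income": -0.4 * applicant["debt_to_income"],
        "credit_history_years": 0.05 * applicant["credit_history_years"],
        "missed_payments": -0.1 * applicant["missed_payments"],
    }
    score = max(0.0, min(1.0, 0.5 + sum(contributions.values())))
    return score, contributions

def decide(applicant: dict, approve_at=0.7, reject_at=0.3) -> Decision:
    """Approve or reject only when the score is clearly above or below
    the thresholds; everything in between goes to a human reviewer."""
    score, contributions = score_application(applicant)
    if score >= approve_at:
        return Decision("approved", [])
    if score <= reject_at:
        # Surface the most negative factors as the explanation.
        worst = sorted(contributions.items(), key=lambda kv: kv[1])[:2]
        return Decision("rejected", [name for name, _ in worst])
    return Decision("needs_human_review", [])

print(decide({"debt_to_income": 0.9, "credit_history_years": 1, "missed_payments": 3}))
```

In practice, the explanation would come from the actual model (for example, via feature attributions), and the review thresholds would be set by policy rather than hard-coded.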
Massive Energy Requirements
AI systems, especially massive, general-purpose generative systems built on large language models, require enormous amounts of energy to train. Training costs for foundation models such as those of OpenAI, Google, and Anthropic can range from hundreds of millions of dollars to a billion dollars, going up to $10 billion or $100 billion “potentially, if the scaling laws hold,” Werbach said.
Data centers consume about 10% of the energy in most U.S. states. Their combined energy consumption in the U.S. is roughly three times that of New York City and is projected to triple over the next four years, Hosanagar noted.
Some companies have taken big steps to meet those energy needs. Microsoft has signed a deal with Constellation Energy to restart the Three Mile Island nuclear power plant in Pennsylvania, which was retired in 2019 after it became economically unviable. Constellation will invest $1.6 billion to restart the plant, and Microsoft has agreed to buy about $800 million worth of power from it annually for 20 years. Google and Amazon have also placed orders with companies developing next-generation small modular nuclear reactors.
The push to try “every method possible to get more energy generation in the U.S … will really put pressure for innovation,” especially in nuclear power, Werbach noted. Such innovation will also focus on making AI systems more efficient and on developing better architectures for neural networks, Hosanagar said. Regulators, too, will have an expanded role in thinking about energy security and related issues such as pricing implications for consumers, he added.
Regulation’s Impact on Competition and Collaboration
AI clearly presents a big opportunity for collaboration between companies on self-governance, standards, and best practices. One initiative is the Partnership on AI, where tech companies have come together to create pathways for AI governance and implementation.
On the competitive dimension, it is crucial to have “a regulatory backstop” below which companies should not be able to go in ensuring the right governance, including accuracy, fairness, and transparency, Werbach said. Without those baselines, companies will “race to the bottom” simply because they have to compete, he added. “We want to have a race to the top.”
“Don’t expect Silicon Valley to be the place that will take a governance-first approach.”— Kartik Hosanagar
That “race to the top” gets help from direct government funding, incentives, education, and immigration, areas that the Biden executive order on AI also supported, Werbach said. He expected the Trump administration to “double down on all of that.” Incentives that let companies game the regulatory process must be removed; instead, incentives should be structured so that “self-interest is actually of collective benefit,” he added.
As Puntoni saw it, companies could take proactive steps on AI governance “because they are afraid of regulation and want to prevent heavy-handed [punitive measures].” Or they could try to gain “a direct influence on regulation” through lobbying, or what is known as “regulatory capture.”
“Regulatory capture is a big concern,” Werbach said. “We need better policies, and better regulatory structures that don’t get captured.” Hosanagar suggested that open-source technologies could be one option: they could make capture less likely by lowering or eliminating entry barriers, and thus softening companies’ competitive instincts. On the other hand, open-source AI models would be hard to regulate because their users are far too numerous and their activities difficult to track.
How Regulation Works in Practice
While regulation sets the requirements, liability is a “very powerful tool … [where] someone’s going to sue you,” Werbach noted. He cited the biometric privacy law passed in Illinois in 2008, which protects individuals against unauthorized use of their biometric information. “This is the most significant AI law ever passed in terms of actually influencing companies,” he said. Litigation under that law forced Facebook to shut down its photo-tagging technology and pay users $550 million in a settlement.
Regulatory compliance can bring rewards such as grants and incentives, but competitive advantage is another driver of best practices. Hosanagar cited the AI-powered search engine Perplexity AI, which has focused its innovation on transparency and accuracy. “It’s not because they want to do good. It’s because that gives them a competitive advantage.”
But competitive instincts alone cannot be expected to do the governance job well; strong regulation is an imperative. “Don’t expect Silicon Valley to be the place that will take a governance-first approach,” Hosanagar said.
Above all, the emphasis must be on education, according to Hosanagar. “I would require … every school to have an AI literacy program in middle school and high school. I would, in fact, require every student to create deepfakes. I would probably want every person on earth to spend a lot of time creating lots of deepfakes. Once you do it a few times, you start to realize what this is about, and you start to realize, ‘Everything I see has to be questioned.’”