The following article was written by Kevin Werbach, a professor of legal studies and business ethics at the Wharton School, a former FCC counsel, and an expert on technology regulation, law, and ethics.
AI has been a huge topic since the release of ChatGPT two years ago. Whatever one thinks about the incoming Trump administration, it’s important to recognize this reality: A big push for looser AI regulation is heading our way. An influential group of Silicon Valley figures put its full weight behind Trump’s presidential campaign, and this cohort expects its agenda to be implemented. Its mantra is to unshackle AI development in the U.S. in order to win the AI arms race with China and deliver extraordinary benefits to society.
Nonetheless, AI governance, as a topic of serious debate (and as a matter of practical implementation), is not going away. At first glance, the Trump AI posture may seem like a 180-degree turn from Biden’s. It certainly is in rhetorical style. But in practice, the Trump team’s likely AI policies show surprising continuities with those pursued by the Biden administration.
To be sure, we shouldn’t minimize the likelihood of erratic policy moves during the next four years. But organizations can continue working toward an environment that fosters AI innovation while addressing its negative consequences through responsible and adaptive measures.
The first thing to realize is that the vast majority of regulation covering AI isn’t explicitly “AI regulation.” Much of it is enforcement of general-purpose rules. For example, an AI-based résumé screening tool that disadvantages minority or female candidates could run afoul of anti-discrimination laws. Other concerns are the domain of private litigation. Major AI labs face many copyright and privacy lawsuits over their indiscriminate scraping of training data, and none of those claims rely on AI-specific regulation.
Even if we’re talking about targeted governmental regulation, the U.S. government is only a minor actor. States have passed dozens of AI bills, with hundreds more under consideration. Despite being formally limited in geographic scope, those laws can force companies to change their practices nationwide, especially when they originate in large states like California. And other governments, most notably the European Union, have adopted extensive AI regulations that apply to anyone doing business within their borders or with their citizens, which sweeps in most large U.S.-based firms.
A Shift to Self-Regulation
The reality is that, given the difficulty of passing legislation in Congress, the Biden administration’s regulatory approach to AI was largely oriented around “soft law”: infrastructure and incentives for companies to engage in effective AI governance themselves. These efforts, such as the National Institute of Standards and Technology’s (NIST’s) collaborations with industry stakeholders on AI risk management techniques, are likely to endure in some form, even if the formal provisions of the Biden executive order on AI are repealed, as the Trump team has proposed.
In fact, such “industry self-regulation” will be the catchphrase for AI policy in the next four years, positioned as an alternative to prescriptive government mandates. The truth is that while there are arguments against detailed governmental rules, no one suggests that AI is risk-free. Least of all the tech leaders and AI executives themselves, many of whom have been warning for years that rapid advances in technology call for serious “AI safety” initiatives to prevent catastrophic harms. And whether it’s autonomous vehicles crashing, teenagers committing suicide after immersing themselves in conversations with AI companions, or criminals pulling off thefts and scams using deepfake technology, examples of AI harms continue to pile up. Even those arguing the loudest that such failures should not distract from AI’s manifold benefits can’t deny that they deserve responses.
All of this is why most major AI developers have invested significant human and technical resources in AI governance and responsibility practices. It’s why industry groups such as the IAPP (the main trade association of privacy professionals) and the Future of Privacy Forum are aggressively pivoting to AI, and seeing brisk industry demand for coordination on best practices and standards. As a business school professor, I see this in the demand for Wharton’s training programs and research initiatives that bring together corporate leaders around accountable AI. There are also models of effective self-regulation, such as FINRA (the securities industry’s self-regulatory organization for broker-dealers), that can be applied to the AI context.
To be fair, as valuable as self-regulatory mechanisms are, there is ultimately a need for regulatory sanctions and government-defined rules, at least as a backstop. Private initiatives can fail, and some companies may simply ignore them; if flouting the rules confers a competitive advantage, the result is an unhealthy race to the bottom. But we’re past the point where the alternative to regulation is an anything-goes “Wild West” for AI. Companies know that users will not trust and adopt these technologies if they keep experiencing or hearing about dangerous failures.
Spectacular failures, such as cybersecurity breaches or AI systems in healthcare killing patients by hallucinating false treatment recommendations, could bring industrywide reputational harm. Even governments single-mindedly chasing AI dominance recognize this: China, America’s competitor for AI supremacy, actually has more extensive AI regulation than the U.S. today. And with AI adoption rippling across the entire economy, the AI sector is not just Silicon Valley tech firms. Regulated businesses in massive industries such as financial services, healthcare, communications, energy, transportation, and defense will certainly push to limit government requirements, but they understand the value of governance.
All in all, there is reason to expect that even as the outward aspects of AI policy shift dramatically in the second Trump administration, there will be significant continuity on the ground. Organizations would be wise to stay focused on developing and implementing effective governance mechanisms. Whatever happens in Washington, the next four years are bound to reveal more of both AI’s potential and its dangers.