Kevin Werbach, Wharton professor of legal studies and business ethics, explores the goals, limits, and broader national context of California’s newly enacted AI child-protection bill and what it signals for future regulation and industry responsibility.

Transcript

Following is an edited transcript of the conversation.

Dan Loney: Recently, California Governor Gavin Newsom signed a bill designed to strengthen the protections for children in the online space. The focus of the bill is around AI and chatbots, and how they interact with kids. To discuss the bill and its impact, we have Kevin Werbach, professor of legal studies and business ethics at the Wharton School.

This is a very important topic when we’re thinking about the development of AI, and making sure that we have the protections necessary out there. As this bill is signed and it’s now being put into action, what are your thoughts about the importance of having something like this?

Kevin Werbach: There is a whole series of AI-related laws that are under consideration in states throughout the U.S., and especially in California, which is where this particular bill was adopted. California has been the central battleground because it’s such a big, influential state and because Congress has not moved forward on federal legislation [on AI].

This particular bill comes at a time when there is growing concern about AI companions, and about children in particular being harmed, in some cases engaging in self-harm or suicide because of their interactions with AI chatbots and companions.

The bill actually does less than it sounds like at first, but at least it tries to put a stake in the ground of saying that the state of California wants to put obligations on companies, especially the ones that are knowingly providing these services to children.

Loney: Whether we're talking about this from the federal perspective or at the state level, it seems like we need to see a lot more push from the states, because the federal government just isn't able to move a significant piece of legislation forward that covers the entire country.

Werbach: This has been the subject of tremendous debate. There was a push a few months ago to have a federal moratorium where the federal government would prohibit states from regulating AI for 10 years, out of concern from conservatives and from the AI companies that 50 states regulating in complex and inconsistent ways will be a real burden on innovation. There’s some truth to that concern, but there’s also some truth to the concern you raise, that these are real issues.

States are the laboratories of democracy. They’re closer to the ground. They should be taking action to protect their citizens. Ultimately, we need a balance. It doesn’t make sense to have 50 different state laws, but it also doesn’t make sense to say we should just not do anything on these issues. So let’s look at the legislation that’s been adopted and assess it on its own merits.

Loney: I guess when you’re talking about children – we know how much the internet is used by kids on different platforms and different websites – you’re talking about individuals that are in their development process. As you alluded to, this is a very important subset of the population that you really do need to look out for.

Werbach: The challenge is: how do you look out for them? This is a debate we've been having increasingly in recent years about social media. There is a law called COPPA (the Children's Online Privacy Protection Act) that's mainly about privacy, but it has to do with online services that know they're dealing with children.

But [COPPA] doesn't require strict age verification. So, if you tell Facebook that you're 13 years old, they have to do something. But if you don't, or if you're a 13-year-old who pulls the drop-down and says, 'I'm 20 years old,' then potentially they're not obligated.

We have the same issue with AI, and there are all kinds of complexities around implementing those age restrictions. But absolutely, children are a distinct category. And the AI companies themselves – OpenAI, Anthropic and so forth – have acknowledged that they need to step up more.

There are issues even with adults, with what’s called AI psychosis that [affects] some small percentage of people; it actually does drive them crazy. And so, we need to address that.

But with kids, definitely, whether it’s mandated by law or just appropriate good governance and actions by the companies, they need to take some steps to address the issues with kids using these technologies.

Curbing Innovation?

Loney: But there are some out there who also say that if you put a heavier regulatory framework around things like the internet, it's a way to cut back on innovation. How do you respond to those comments?

Werbach: It’s not all or nothing. That’s what I said before. People always want to say this law will stop innovation and that it will have a chilling effect. And that’s a legitimate concern. But it’s a problem to say that anytime any law gets passed.

The reality is all these companies are investing literally trillions of dollars building this AI infrastructure because they’re competing against each other; they’re [also] competing against China. They believe they are building something that is going to be absolutely foundational to not just the future of the internet, but the future of business and the world.

So, yes, there’s an impediment and some costs that they have to put some guardrails on for kids. But I really don’t think companies like OpenAI and Google and so forth are going to shut down what they’re doing just because of those costs.

Loney: Where do you think we stand right now? And maybe even more so, where do you think we’re heading in terms of a regulatory framework on the internet, but also around AI?

Werbach: Well, that's a big question, but honestly, the big pressure is not regulation. The big pressure is liability. One of the things that this law, Senate Bill 243, does is create private liability if companies are required to take steps to protect kids and fail to do so. But even without that law, there are a number of lawsuits that have already been filed by parents of kids who died by suicide after using some of these AI chatbot and companion tools, and they're proceeding under general principles of tort law. And so, I think regulation is important.

But again, at the end of the day, most of these major companies acknowledge they need to take appropriate steps. Most of the enterprises I work with that are deploying AI acknowledge they need appropriate AI governance. And the big club that they're worried about is a lawsuit with massive amounts of damages.