A Supreme Court ruling last week handed Democrats a win over Republican claims that the White House coerced social media companies into censoring conservative content, but Wharton marketing professor Pinar Yildirim said the fight is far from over.
She thinks the longstanding debate over the moderation of social media content will rage on, especially during this divisive election year.
“The debate will continue because we do not have current guidelines or laws to regulate content, and we’ll see potentially more debate as we get closer to the election,” she said during an interview with Wharton Business Daily. (Listen to the podcast.) “It’s debated among consumers, it’s debated among policymakers, it’s debated among advertisers, it’s debated among different parties.”
Control is at the heart of the issue, said Yildirim, who studies social networks and their influence on society. Who decides what constitutes misinformation or disinformation? Who decides what kind of content needs moderating? And how should that moderation be done?
Social Media and the Free Market
In the case before the Supreme Court, officials in Missouri and Louisiana, along with five individual social media users, filed a lawsuit in a lower court accusing the Biden administration of violating the First Amendment by conducting a campaign of coercion to force platforms to take down content related to public health and elections. The Supreme Court ruled 6-3 that the plaintiffs had no legal standing to bring the case because they failed to provide sufficient evidence to back up their claims. Writing for the majority, Justice Amy Coney Barrett said that although the government played a role in some of the moderation decisions, the platforms “exercised their own judgment.”
Yildirim said the decision rests on a technicality, so it doesn’t get stakeholders any closer to settling the question of content moderation. The United States could continue a free-market approach that allows the platforms to dictate content based on consumer behavior, or it could pursue a controlled approach through regulations and other legal safeguards.
“The debate will continue because we do not have current guidelines or laws to regulate content, and we’ll see potentially more debate as we get closer to the election.”— Pinar Yildirim
Yildirim cited the Network Enforcement Act in Germany, which requires social media platforms to remove hate content and false information. Since the law passed in 2017, there has been a measurable decline in online hate speech, particularly anti-refugee content, she said. Researchers have also found a decrease in offline incidents of hate crimes.
“The purpose of these regulations is not necessarily to give government power over what issues are top of mind, but also to protect individuals,” she said. “This is why it becomes really difficult to decide what’s the right thing to do. We want to protect individuals, and individuals have very different preferences on what they want from the government.”
Even if policymakers design well-intended regulatory protections, she said, there’s nothing to prevent a subsequent administration from manipulating the restrictions to its own advantage.
Yildirim noted that censorship is as old as government itself. History is littered with examples of political leaders exerting influence over traditional forms of media. In the modern era, that tension is complicated by new media, including generative AI and deepfakes, and by the documented mental health toll that social media is taking on some users, particularly children and teens.
“As a result of all these potential issues, we will see more debates and we’ll see more proposals on trying to regulate content on social media through the government,” she noted. “That being said, we don’t always think that high-level, top-down regulations are more effective to protect individuals, compared to letting markets decide.”