For years, social media companies have done relatively little to keep hate speech off their platforms, often accepting racist, homophobic and anti-Semitic screeds and comments as the cost of doing business. More recently, though, social media has exploded onto the front lines of the battle over hate speech, free speech and the sociopolitical war gripping the U.S.
One big recent spark was provided by Alex Jones. The conspiracy theorist has long floated patently false claims that child-sex rings run by prominent public figures (Robert Mueller, Hillary Clinton) are operating right under our noses, and that the Sandy Hook shooting was a hoax staged by gun-control activists. In early August, social media companies decided they had had enough: YouTube took down Jones’s channel — with 2.4 million subscribers — saying it violated the firm’s policy on hate speech, and Apple dropped some of Jones’s InfoWars podcasts from its app for the same reason. Facebook removed some of his pages, saying they were “glorifying violence” and using “dehumanizing language to describe people who are transgender, Muslims and immigrants.”
Twitter hesitated, but eventually “permanently suspended” Jones and InfoWars for what it called repeated violations of its policy against abusive behavior.
Jones cried censorship. Now, social media companies are caught among multiple rocks and hard places. They want to create a pleasant environment for users (“safe,” in industry parlance), yet they would like to be seen as upholding the American value of free speech. They enjoy the primacy once held by traditional media in this country, but they want neither the regulation nor the truth-mediating responsibilities that traditional media shouldered for decades.
Above all, perhaps, they want to keep growing users so they can keep growing profits.
“This issue is definitely a threat, because currently [social media firms] are on a roll, they make a lot of money and they are only growing in power,” says Gad Allon, director of Wharton’s Jerome Fisher Program in Management and Technology and professor of operations, information and decisions. “And so if the public is going to go against them, if the political class is going to go against them, they will find themselves in a very different kind of situation.”
Calls for blocking certain kinds of speech on social media have grown in recent months, in the U.S. and elsewhere. Zeid Ra’ad al-Hussein, the former United Nations high commissioner for human rights, accused Myanmar military officials of using social media to incite genocide, and he called on Facebook to remove content, which it did. The Sri Lankan government shut down Facebook, WhatsApp and other platforms in the country earlier this year after violence against Muslims; only after Facebook officials visited the country with a pledge to curtail hate speech and misuse was the ban lifted.
Social media companies have been called to account in numerous congressional hearings. Facebook chief Mark Zuckerberg, asked during testimony this past April to define hate speech, said: “Senator, I think this is a really hard question, and I think it’s one of the reasons why we struggle with it.” Zuckerberg has resisted calls to have Facebook take down the pages of Holocaust deniers.
“I feel empathy for the leaders of these organizations, because I believe they are conscientious and want to do the right thing, but it is hard to know what the right thing to do is.”–Christopher Yoo
Some see social media companies as exercising too much editorial control, as well as feeding back to people what they already believe. “The worry is that social media is creating an echo chamber effect that reinforces polarization in our society, and the solution to that is to radically limit social media’s control over what information gets passed on or what doesn’t,” says Christopher S. Yoo, director of the University of Pennsylvania Law School’s Center for Technology, Innovation & Competition and professor of law, communication and computer and information science.
“On the other hand, in the aftermath of the 2016 elections, there is enormous concern that false or misleading information is being conveyed by social media, and the solution there is for them to exercise more editorial control. Add this to Cambridge Analytica and Trump’s calls to regulate search results over what comes up when he Googles his name, and social media doesn’t know where to jump,” says Yoo. “I feel empathy for the leaders of these organizations, because I believe they are conscientious and want to do the right thing, but it is hard to know what the right thing to do is.”
But what if hate speech makes the Facebook news feed such a frightening and depressing place that the public begins to avoid it?
“That’s the biggest fear for Facebook,” says Allon. “That people will view it as a fearful place — if I want to feel bad that’s where I will go. That’s why they never want to show you opposing views, because it may anger you. The moment you think about Facebook the same way as smoking, that’s the death of Facebook.”
The Right to Say Anything
Social media companies may or may not decide to do something about hate speech. But right now, legally speaking, they are not compelled to do anything.
“Strictly as a matter of First Amendment law, they can do whatever they want. They could say, ‘We’re only going to publish people who are members of the Republican party,’ and there is nothing to prevent Facebook from doing what Trump is accusing them of doing,” says Nadine Strossen, law professor at New York Law School, immediate past president of the American Civil Liberties Union and author of HATE: Why We Should Resist It with Free Speech, Not Censorship. Discrimination laws might prevent them from discriminating on the basis of race and other factors, “but certainly not political ideology.”
The First Amendment concerns only government control of free speech, noted John Carroll, professor of mass communication at Boston University, in a recent conversation on the Knowledge at Wharton show on SiriusXM. Social media companies “have been really reluctant to remove content from Alex Jones in terms of … [it being] fraudulent content,” he said. “What they have done is said, ‘This is hate speech, and we have the right to remove it under our terms of service’ — and as a private business, they absolutely have that right.”
In fact, many Americans perceive social media as playing an active role in censorship. In a June Pew Research Center survey of 4,594 U.S. adults, 72% of respondents said they think it likely that social media platforms actively censor political views those companies find objectionable. Republicans were especially inclined to think so: 85% of Republicans and Republican-leaning independents said it was likely that social media sites intentionally censor political viewpoints, with 54% saying it was very likely.
Social media companies routinely deny that they actively censor political views, and the tendency away from censorship was built into the structure of social media long before the term came into use. Section 230 of the Communications Decency Act, enacted as part of the Telecommunications Act of 1996, established protection from liability for a provider or user of an “interactive computer service” — as opposed to publishers — for carrying third-party content. In other words, it firmly established what would become social media as a largely unmediated bulletin board.
“The moment you think about Facebook the same way as smoking, that’s the death of Facebook.”–Gad Allon
“This is why social media companies, when they first came on the public scene, said, ‘We are not media companies; we are tech companies,’” says Strossen. “They knew they had the right and power to act as traditional media companies and serve an editorial function in choosing what to publish and what not to publish, but deliberately said, ‘We are choosing to not engage in that kind of content discrimination, and will let all voices have equal access to our platforms.’”
In avoiding the gatekeeper role, social media established itself as being no more liable for messages conveyed than telephone companies were liable for conversations traveling over their phone lines.
Section 230 created “a safe harbor for Good Samaritan blocking of obscene, filthy, harassing or objectionable material,” says Yoo, “to give companies as conveyors of information latitude to exercise some editorial discretion without liability, to balance these concerns.”
The courts, however, haven’t provided great clarity on how much control the companies may exercise. “If we take the statute seriously, social media companies’ control is limited to things that are obscene or harassing,” Yoo notes. “There have been court decisions interpreting this protection very broadly, which would give social media companies the latitude to control their news feeds. And then there are courts that have interpreted it narrowly, in which case companies would face a great deal of liability, so there is a fair amount of legal uncertainty.”
The problem with the phone-line analogy is that no one picks up the phone to find themselves eavesdropping on thousands of white supremacists and Holocaust deniers. Facebook’s community standards statement says the platform does not allow hate speech “because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence. We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”
Questions of Interpretation
What ensues are some thorny questions: who gets to interpret these standards; what biases and life experiences individual interpreters bring to the task; and larger questions of context that algorithms are unable to consider.
Facebook, for instance, recently flagged the Declaration of Independence, removing paragraphs 27-31 when a community newspaper in Texas published the document in the days leading up to the Fourth of July. It wasn’t clear whether snipping out part of the nation’s founding document was purely algorithmic or involved a layer of human review, but the trigger appears to have been a reference to “merciless Indian Savages,” according to Slate.
On the slippery slope of regulating speech, what one group considers free and legitimate speech another might consider incitement, and user agreements are of limited help, says Ron Berman, a Wharton marketing professor. “Many of these agreements use the grey line between illegitimate behavior on the platform and illegitimate consequences, which is very problematic. For example, a call for Catalonian independence from Spain on Facebook may be considered free legitimate speech by a large group [of Catalonians], but if it later causes a violent protest, it may become [seen as] illegitimate.”
“From a public-relations perspective, I think the issue is less about investors and regulators, and more about advertisers who may decide to stop using Facebook as an advertising platform because it will be seen as allowing hate speech.”–Ron Berman
Pressure is building on social media firms to do something about hate speech, and “no doubt, that threat of regulation will have an impact on the culture of these companies,” says Strossen. But regulating speech would be a grave mistake, she says. Even if Alex Jones did violate social media community standards by engaging in disparaging, dehumanizing, degrading and demeaning ideas, “one person’s view of what that concept is is antithetical to another’s,” she says. “Some say Black Lives Matter is demeaning to others. Some say All Lives Matter is racist because it is insensitive to those whose lives are in jeopardy. These are all subjective matters, so the only solution is not suppressing free speech. There is more harm in empowering government officials or private-sector actors with making these discretionary decisions.”
But social media sites do have a legitimate business argument for stamping out hate speech as much as possible. One risk with two-sided platforms like Facebook is that they can quickly undergo a “phase shift” from a positive state to a negative one, says Berman. “For example, if it turns out that the Facebook ad-targeting algorithm allows advertisers to discriminate based on race, gender or any other factor, or that the targeting algorithm would make it possible to promote hate speech, other advertisers … would not want to appear as condoning this advertising platform,” he says. “From a public-relations perspective, I think the issue is less about investors and regulators, and more about advertisers who may decide to stop using Facebook as an advertising platform because it will be seen as allowing hate speech.”
Facebook, YouTube and Twitter are hiring thousands of new moderators, or “News Feed integrity data specialists,” as Facebook calls them, to filter out content the companies consider in violation of their standards. But moderators are inconsistent, and that inconsistency puts minority users of social media at a disadvantage, according to a report last year by the Center for Investigative Reporting. The report cited Facebook users whose posts on racial matters were deleted by Facebook, but whose white friends, when asked to post the same content, found that their posts stayed up.
Don’t hold your breath for justice consistently applied. “The standards are irreducibly subjective, so the standard will be enforced with the subjective values of the enforcer,” says Strossen.
In the Silicon Valley mindset, however, there is a belief that everything can be solved algorithmically — “that there is a technical solution to every societal problem,” says Allon. “They believe they have the solution but just have not found it yet.”
The Wisdom of the Free Market
The other way of looking at the situation is that social media, as an industry, is still green. “To some extent, I think social media companies are going through a high-tech rite of passage,” says Yoo. “Many technologies are born and enjoy an initial period of benign neglect, and [their creators] don’t spend much time thinking about the broader social impact of their products and the possibility they might be regulated.”
Strossen says what is needed to help combat hate speech is better media literacy. “If I had to choose, I’d rather have more [guidance for people in sorting] the truth from that which is false, helping them to navigate to find messages that are supportive of how to facilitate their own effective counter-speech against hate speech, and to reach out to hate mongers to help them change their views.”
“Just as you get more hate speech through these new technologies, you also have much more effective response to hate speech.”–Nadine Strossen
What’s important to remember, she says, is that while a lot of negativity has been let loose in the world as a result of social media, a lot of good causes have also traveled far and wide. “Just as you get more hate speech through these new technologies, you also have much more effective response to hate speech. The other speech going on is incredibly inspiring. You could not have had the social-justice movements, from Black Lives Matter to #MeToo and the anti-gun movement. They really flourished thanks to social media.”
Allon says social media companies need to be more transparent about how they decide what is hate speech, and what they choose to do about it: “How do these algorithms work? How do they decide what I see and what I don’t see?”
One obvious way to reduce exposure to hate speech and provide safe zones would be to have a variety of social media platforms available to suit different tastes — one place that truly is about sharing vacation photos and getting in touch with high school friends, and others more political and controversial. Why isn’t this kind of sorting — through free-market dynamics — happening?
“I do think, actually, it is,” says Yoo. “If you want to see the future trends, look at what people just entering the market are doing, and that means looking at young people. They are on multiple social media platforms simultaneously, and for them different platforms serve different purposes. So I think you are starting to see diversification among social media, and I think that is a good, healthy development.”
But it’s also important to note that Facebook, Twitter and Google combined “basically monopolize” the digital information environment, said David Karpf, associate director of George Washington University’s School of Media & Public Affairs, who joined B.U.’s Carroll on the Knowledge at Wharton show. “If those three shut you down, then it becomes tremendously hard to reach a massive audience.”
Facebook had 2.23 billion monthly active users as of June 30. Set against the roughly 3.3 billion social media users worldwide, that raises a question: Is Facebook simply too big today to be considered a social media platform and business in the usual sense? Is it really more like a public utility, given its scale and ubiquity?
“I don’t think so,” says Yoo. “People forget, when they are concerned about the dominance of Facebook, that 10 years ago it didn’t really exist. What we see in the broad scale is that new players have come up or older players have reinvented themselves in dramatic ways, which indicates the market is incredibly dynamic. We forget that if we were having this discussion a decade ago we might be talking about MySpace — or two decades ago, AOL. The AOL-Time Warner merger was treated like the end of history, and as it turns out, [it was only] the end of $200 billion worth of shareholder value…. Google is a company [that is] only 20 years old. Apple, until it reinvented itself, was in the doldrums. These are incredibly dramatic changes that are the sign of an industry that is constantly buffeted by the gales of creative destruction in a very positive way.”