Serious concerns have arisen in the past week over how social media firms guard the privacy of their users’ personal data, and how analytics built on that data can be used to influence voter preferences and turnout. Those worries follow a whistleblower’s account to The Observer newspaper in the U.K. of how Cambridge Analytica, a data analytics firm with offices in London and New York City, gained unauthorized access to more than 50 million Facebook profiles and used them to micro-target voters on behalf of Donald Trump in the 2016 U.S. presidential election.
In the fallout, Facebook faces its toughest test yet on privacy safeguards, and its founder and CEO, Mark Zuckerberg, has been summoned by MPs in the U.K. He faces similar calls from the U.S. Congress and from India, amid revelations that Cambridge Analytica also worked to influence the 2016 Brexit referendum and elections in India, Nigeria and other countries.
U.S. special counsel Robert Mueller is already examining Cambridge Analytica’s ties with the Trump campaign as part of his probe into Russia’s alleged meddling in the 2016 presidential election. Significantly, U.S. billionaire and conservative fundraiser Robert Mercer had helped found Cambridge Analytica with a $15 million investment, and he recruited former Trump advisor Steve Bannon, who has since left the firm. The firm initially sought to steer voters towards presidential candidate Ted Cruz, and after he dropped out of the race, it redirected its efforts to help the Trump campaign.
In order to gain insights into the fallout from the Cambridge Analytica scandal, Knowledge at Wharton spoke to Wharton marketing professors Ron Berman and Gideon Nave; Jennifer Golbeck, director of the social intelligence lab and professor of information studies at the University of Maryland; and Sinan Aral, management professor at MIT’s Sloan School of Management. Golbeck and Aral shared their views on the Knowledge at Wharton show on SiriusXM channel 111.
“We’re experiencing a watershed moment with regard to social media,” said Aral. “People are now beginning to realize that social media is not just either a fun plaything or a nuisance. It can have potentially real consequences in society.”
The Cambridge Analytica scandal underscores how little consumers know about the potential uses of their data, according to Berman. He recalled a scene in the film Minority Report where Tom Cruise enters a mall and sees holograms of personally targeted ads. “Online advertising today has reached about the same level of sophistication, in terms of targeting, and also some level of prediction,” he said. “It’s not only that the advertiser can tell what you bought in the past, but also what you may be looking to buy.”
Consumers are partially aware of that because they often see ads that show them products they have browsed, or websites they have visited, and these ads “chase them,” Berman said. “What consumers may be unaware of is how the advertiser determines what they’re looking to buy, and the Cambridge Analytica exposé shows a tiny part of this world.”
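The mechanics behind such predictions are conceptually simple. The sketch below is a toy illustration of the kind of propensity model Berman alludes to: a classifier scored over a visitor’s recent browsing signals. The features, weights and data are all synthetic and invented for illustration; production ad systems rely on far richer signals and proprietary models.

```python
# Toy sketch of the "what you may be looking to buy" prediction Berman
# describes: a classifier over recent browsing signals. Features and
# data are synthetic; real ad systems use far richer signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Per-user session features: [product page views, category searches,
# days since last visit, items left in cart]
X = rng.poisson(lam=[3, 2, 5, 1], size=(500, 4)).astype(float)

# Synthetic label: did the user purchase within the following week?
logits = 0.6 * X[:, 0] + 0.8 * X[:, 3] - 0.3 * X[:, 2] - 2.0
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new visitor: a high propensity triggers the ads that "chase" them.
visitor = np.array([[5.0, 3.0, 1.0, 2.0]])
print("Purchase propensity:", round(model.predict_proba(visitor)[0, 1], 2))
```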
A research paper that Nave recently co-authored captures the potential impact of the kind of work Cambridge Analytica did for the Trump campaign. “On the one hand, this form of psychological mass persuasion could be used to help people make better decisions and lead healthier and happier lives,” it stated. “On the other hand, it could be used to covertly exploit weaknesses in their character and persuade them to take action against their own best interest, highlighting the potential need for policy interventions.”
Nave said the Cambridge Analytica scandal exposes exactly those types of risks, risks that existed well before the internet era. “Propaganda is not a new invention, and neither is targeted messaging in marketing,” he said. “What this scandal demonstrates, however, is that our online behavior exposes a lot about our personality, fears and weaknesses – and that this information can be used for influencing our behavior.”
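Kogan’s quiz app drew on a published line of research showing that simple models trained on Facebook “likes” can predict questionnaire-based personality scores. The sketch below is a toy reconstruction of that idea on synthetic data; the actual models Kogan or Cambridge Analytica used have not been made public, so everything here is an illustrative assumption.

```python
# Toy reconstruction of likes-based personality prediction. All data is
# synthetic: a binary user-by-page "likes" matrix and a trait score that
# correlates, by construction, with a handful of pages.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 200

likes = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)

# Hypothetical ground truth: a questionnaire-based trait (say, openness)
# driven by the first 10 pages plus noise.
true_weights = np.zeros(n_pages)
true_weights[:10] = rng.normal(0, 1, 10)
trait = likes @ true_weights + rng.normal(0, 1, n_users)

X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("R^2 on held-out users:", round(model.score(X_te, y_te), 2))
```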
In Golbeck’s research projects involving the use of algorithms, she found that people “are really shocked that we’re able to get these insights like what your personality traits are, what your political preferences are, how influenced you can be, and how much of that data we’re able to harvest.”
Even more shocking, perhaps, is how easy it is to find the data. “Any app on Facebook can pull the kind of data that Cambridge Analytica did – they can [do so] for all of your data and the data of all your friends,” said Golbeck. “Even if you don’t install any apps, if your friends use apps, those apps can pull your data, and then once they have that [information] they can get these extremely deep, intimate insights using artificial intelligence, about how to influence you, how to change your behavior.” But she draws a line there: “It’s one thing if that’s to get you to buy a pair of shoes; it’s another thing if it’s to change the outcome of an election.”
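What made the harvesting Golbeck describes possible was the permission model of Facebook’s original Graph API. The sketch below illustrates, in broad strokes, the pre-2015 pattern in which a token granted by one consenting user could enumerate that user’s friends and request fields about them. Facebook shut off friend-level access with Graph API v2.0, so this code would not work against today’s API; it is shown only to make the mechanism concrete, and the token value is a placeholder.

```python
# Illustration of the retired Graph API v1.0 pattern: one user's consent
# exposed data about friends who never installed the app. Non-functional
# against the modern API; ACCESS_TOKEN is a placeholder.
import requests

ACCESS_TOKEN = "token-granted-by-one-quiz-taker"
BASE = "https://graph.facebook.com"

# The consenting user's own profile.
me = requests.get(f"{BASE}/me", params={"access_token": ACCESS_TOKEN}).json()

# Their friend list -- people who never saw or approved the app.
friends = requests.get(
    f"{BASE}/me/friends", params={"access_token": ACCESS_TOKEN}
).json()

# Per-friend fields that extended permissions such as friends_likes once
# exposed -- the raw material for trait inference at scale.
for friend in friends.get("data", []):
    profile = requests.get(
        f"{BASE}/{friend['id']}",
        params={"access_token": ACCESS_TOKEN, "fields": "id,name,likes"},
    ).json()
```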
An Expanding Scandal
Although Cambridge Analytica’s work in using data to influence elections has been controversial for at least three years, the scale of its operations emerged only last Saturday. The whistleblower, Christopher Wylie, who had worked with Cambridge Analytica, revealed to The Observer how the firm harvested the profiles of some 50 million Facebook users. The same day, The New York Times detailed the role of Cambridge Analytica in the Trump campaign.
Facebook had allowed Cambridge University researcher Aleksandr Kogan to collect user data through a seemingly innocuous personality quiz app, but Kogan passed the data on, without authorization, to Cambridge Analytica. Wylie told The Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”
Meanwhile, the U.K.’s Channel 4 News captured in a video sting the strategies Cambridge Analytica used in its work to “change audience behavior,” which included “honey traps” and the use of prostitutes. Cambridge Analytica rejected the Channel 4 report, stating, “The report is edited and scripted to grossly misrepresent the nature of those conversations and how the company conducts its business.” Last March, Alexander Nix, the now-suspended CEO of Cambridge Analytica, spoke about his firm’s work for the presidential campaign of Ted Cruz, and then for Donald Trump, during his keynote address at the Online Marketing Rockstars event in Hamburg, Germany.
Transparency Paradox for Facebook
According to Aral, the Cambridge Analytica scandal could have “a potentially chilling effect on the very resource that we need to get to the bottom of the effect of social media on our democracy, our economy and even our public health.” Facebook is facing “a transparency paradox,” he said. “On one hand, it is being pressured strongly to be more open and transparent about how its advertising targeting algorithms work, how its news feed algorithms work, or how its trending algorithms work. On the other hand, it is being strongly pressured to be more secure about the release of data for research. If we’re going to thread this needle, [Facebook] has to find a way to be more open and transparent and secure at the same time.”
“Facebook has tried to play both sides of [the issue],” said Golbeck. She recalled a study by scientists from Facebook and the University of California, San Diego, that claimed social media networks could have “a measurable if limited influence on voter turnout,” as The New York Times reported. “On one hand, they claim that they can have a big influence; on the other hand they want to say ‘No, no, we haven’t had any impact on this.’ So they are going to have a really tough act to play here, to actually justify what they’re claiming on both sides.”
Facebook appears to have decided to tackle the problem head-on. On Wednesday, March 21, Zuckerberg posted a candid statement: “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.” The same day, he told CNNMoney that he would ensure that developers don’t have as much access to Facebook data as Kogan had. “We [also] need to make sure that there aren’t any other Cambridge Analyticas out there, or folks who have improperly accessed data. We’re going [to] investigate every app that has access to a large amount of information … and if we detect any suspicious activity, we’re going to do a full forensic audit.” He said Facebook would also build a tool to let users know if Cambridge Analytica had accessed their data.
How It All Began
In his Facebook post, Zuckerberg listed the sequence of events from his standpoint: In 2013, Kogan secured access to data on some 300,000 Facebook users for a personality quiz app he was developing. Because those users also shared their friends’ data, Kogan was eventually able to access data on tens of millions of users, Zuckerberg wrote. A year later, in an attempt to prevent abuse, Facebook changed its platform “to dramatically limit the data apps could access.” In 2015, Facebook learned from journalists at The Guardian that Kogan had shared user data with Cambridge Analytica. It banned Kogan from its platform and required him and Cambridge Analytica to certify that they had deleted the improperly acquired data, and they complied. The tipping point came last week when Channel 4, The New York Times and The Guardian reported that Cambridge Analytica might not have deleted the data as it had certified.
Kogan told the BBC that he is being made a scapegoat and that he believed he acted “perfectly appropriately” in his handling of Facebook users’ data. “The project that Cambridge Analytica has allegedly done, which is to use people’s Facebook data for micro-targeting, is the primary use case for most data on these platforms,” he said, according to a Guardian report. “Facebook and Twitter and other platforms make their money through advertising, and so there’s an agreement between the user of ‘hey, you will get this amazing product that costs billions of dollars to run, and in return we can sell you to advertisers for micro-targeting.’”
Cambridge Analytica’s Reach
According to Nave, it is impossible to know exactly how influential Cambridge Analytica was in its efforts to sway voters towards Trump. “There is obviously a strong interest of both Cambridge Analytica and the media to make over-claims,” he said, adding that information on the strategies used by rival parties is also not available. “Having said that, the differences between the candidates in key swing states were tiny, and even if the influence of such campaigns were very small, it could have led to meaningful effects at the aggregate level, because of the electoral college system.”
Berman said that Cambridge Analytica would “overstate their absolute effect, or what percentage of people changed their vote, but it’s possible that the impact was still large.” Research has shown that “it is extremely hard to change voting patterns by shifting one’s opinion,” he added. “It is possible that Cambridge Analytica’s actions caused more voter turnout for the people they were targeting, or that there were marginal – undecided – voters they were able to influence.” He noted that because elections in the U.S. are usually determined by “a very small majority, any shift of 1%, although small in its absolute effect, will have a relatively big impact.”
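Berman’s 1% arithmetic is easy to make concrete. Using the widely reported 2016 result in Michigan, where roughly 4.8 million presidential votes were cast and the winning margin was about 10,700 votes, a back-of-the-envelope calculation (figures approximate) looks like this:

```python
# Back-of-the-envelope: how a 1% shift compares with a swing-state
# margin. Michigan 2016 figures are approximate.
votes_cast = 4_800_000
margin = 10_700

margin_pct = 100 * margin / votes_cast   # winning margin as % of votes cast
shift = 0.01 * votes_cast                # votes moved by a 1% shift

print(f"Winning margin: {margin_pct:.2f}% of votes cast")        # ~0.22%
print(f"A 1% shift moves {shift:,.0f} votes, "
      f"{shift / margin:.1f}x the margin")                       # ~4.5x
```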
Aral advised caution in reading too much into the degree of influence that Cambridge Analytica might have had on Trump’s election. “We need a lot more research if we’re really going to understand this problem.” Randomized experiments he has conducted measuring influence and susceptibility to influence in social media underscored that uncertainty, he noted. “We just don’t have definitive evidence that can either confirm or deny or quantify the effect of these types of methods on persuasion and/or changing voting behavior enough to change an election outcome or not.”
Finding a Solution
Golbeck called for ways to codify how researchers could ethically go about their work using social media data, “and give people some of those rights in a broader space that they don’t have now.” Aral expected the solution to emerge in the form of “a middle ground where we learn to use these technologies ethically in order to enhance our society, our access to information, our ability to cooperate and coordinate with one another, and our ability to spread positive social change in the world.” At the same time, he advocated tightening use requirements for the data, and bringing back “the notion of informed consent and consent in a meaningful way, so that we can realize the promise of social media while avoiding the peril.”
Regulation, such as limiting the data about people that could be stored, could help prevent “mass persuasion” that could lead them to take action against their own best interests, said Nave. “Many times, it is difficult to define what one’s ‘best interest’ is – and sometimes there’s also a tension between one’s interest and society’s interest,” he added. “Does having broccoli for dinner make you better off than having pizza? Does having a picnic with your family on election day serve your best interest more or less than going to vote? In many cases, the answer is not straightforward.”
How Users Could Protect Themselves
Historically, marketers could collect individual data, but with social platforms, they can now also collect data about a user’s social contacts, said Berman. “These social contacts never gave permission explicitly for this information to be collected,” he added. “Consumers need to realize that by following someone or connecting to someone on social media, they also expose themselves to marketers who target the followed individual.”
In terms of safeguards, Berman said it is hard to know in advance what a company will do with the data it collects. “If they use it for normal advertising, say toothpaste, that may be legitimate, and if they use it for political advertising, as in elections, that may be illegitimate. But the data itself is the same data.”
According to Berman, most consumers, for example, don’t know that loyalty cards are used to track their behavior and that the data is sold to marketers. Would they stop using these cards if they knew? “I am not sure,” he said. “Research shows that people in surveys say they want to maintain their privacy rights, but when asked how much they’re willing to give up in customer experience – or to pay for it – the result is not too much. In other words, there’s a difference between how we care about privacy as an idea, and how much we’re willing to give up to maintain it.”
Golbeck said tools exist for users to limit the amount of data they let reside on social media platforms, including one called Facebook Timeline Cleaner, and a “tweet delete” feature on Twitter. “One way that you can make yourself less susceptible to some of this kind of targeting is to keep less data there, delete stuff more regularly, and treat it as an ephemeral platform,” she said.
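As a concrete sketch of the “treat it as ephemeral” habit Golbeck recommends, the snippet below uses the third-party tweepy library (its v3-era API) to delete tweets older than 90 days. The credentials are placeholders, and note that Twitter’s timeline endpoint only reaches back about 3,200 tweets, which is part of why dedicated deletion services exist.

```python
# Minimal sketch: keep only the last 90 days of tweets, deleting the
# rest. Uses tweepy's v3-era API; all credentials are placeholders.
from datetime import datetime, timedelta
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

cutoff = datetime.utcnow() - timedelta(days=90)

for status in tweepy.Cursor(api.user_timeline).items():
    if status.created_at < cutoff:
        api.destroy_status(status.id)  # permanent deletion
```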
She also suggested programs to teach kids, starting in elementary school, what it means to have data about oneself and what others could do with that data, and to “ramp up people’s literacy about the algorithms and the influence.”
According to Berman, some level of lost privacy will exist as long as advertising is allowed on social media, because it funds the free service. “One safeguard is very strict disclosures by data collectors of how they intend to use the data when they collect it,” he said. “This will of course have an enforcement issue, but it may help to start making it clear to marketers what [constitutes] legitimate and illegitimate use of the data.”
Legitimate Uses of Data
Golbeck worries that in trying to deal with the fallout from the Cambridge Analytica scandal, Facebook might restrict the data it makes available to researchers. “You don’t want this big piece of how society operates just blocked off, accessible only to Facebook and basically the people who are going to help them make money,” she said. “You want academic researchers to be able to study this.” But balancing that with the potential for some academic researchers to misuse it to make money or gain power is a challenge, she added.
Aral described Cambridge Analytica as “a nefarious actor with a very skewed understanding of what’s morally right and wrong.” He pointed out that there’s an important line to be drawn between the appropriate uses of technology “to produce social welfare” through platforms like Facebook, and the work that Cambridge Analytica did. “It would be a real shame if the outcome was to, in essence, throw the baby out with the bathwater and say that the only solution to this problem is to pull the plug on Facebook and all of these social technologies because you know there’s no way to tell the difference between a bad actor and a good actor.”
All said, sophisticated data analytics “may also be used for generating a lot of good,” said Nave. “Personalized communication may help people to keep up with their long-term goals [such as] exercise or eating healthier, and get products that better match one’s needs. The technology by itself is not evil.”