Part 2 of this two-part opinion series on fake news by Eric K. Clemons, a Wharton professor of operations, information and decisions, looks at “how private information is used to aid in the targeted distribution of fake news,” in part via an analysis using a computer-simulation model. Clemons notes that his studies are intended to help better understand “the threat of fake news, and to guide the development of policy to distinguish between legitimate journalism and the crafting of fake news.” Clemons is also the author of New Patterns of Power and Profit: A Strategist’s Guide to Competitive Advantage in the Age of Digital Transformation.

As we discussed in the previous story, fake news has progressed beyond the simple broadcast of lies. It now entails carefully constructed stories that are designed to produce the desired change in beliefs among their intended readers. Because these stories are crafted to resonate with the beliefs and prejudices of their intended readers, and crafted not to be exposed as false by contradicting what those readers already know, targeting matters. Each group of fake news stories is intended for just a single set of readers, and each group of stories is targeted at and distributed to just those readers. Fake news is now effective enough to alter the outcome of close elections, polls, and referendums, and to subvert the will of the people. More importantly, fake news is effective enough to destroy social cohesion and prevent nations from having the will to act.

My previous post described how private information from social media is harvested to aid in the crafting of fake news. This post analyzes how private information is used to aid in the targeted distribution of fake news. I am not aware of any prior examination of the effectiveness of different strategies for the dissemination of fake news. I did not perform these studies in order to improve the ability of fake news craftsmen to manipulate elections or their ability to create social discord. Rather, these studies were designed to increase understanding of the threat of fake news, and to guide the development of policy to distinguish between legitimate journalism and the crafting of fake news.

The technique of resonance fake news is based on having extremely accurate knowledge of the beliefs of target voters, their fears, their desires, and the limitations of their expertise. We explore using fake news to benefit proposition Red; for concreteness, and to be consistent with the previous post, assume that “Red” represents the belief that climate change is not occurring, while “Green” represents the belief that climate change is real.

Computer Modeling

We use a computer model of voter response to conduct four simulation experiments, in order to explore the impact of three strategies for the distribution of fake news. The model was constructed using a simulation package based on system dynamics, an approach ideally suited to modeling changes in populations over time as they are subject to a variety of forces.
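
To make the mechanics concrete: a system dynamics model represents populations as “stocks” and movements between them as “flows” that accumulate over simulated time. The minimal Python sketch below illustrates only that update pattern; the two-stock structure, the names and the rates are invented for illustration and are not taken from the actual model.

```python
# Minimal system-dynamics update pattern: stocks hold population counts,
# and flows move voters between stocks on each simulated cycle.
# The stock names and the 2% flow rate are illustrative placeholders.

def step(stocks: dict, rate: float) -> dict:
    """One cycle: a fraction `rate` of undecided voters flows to Red."""
    flow = stocks["undecided"] * rate             # flow = stock x rate
    return {
        "undecided": stocks["undecided"] - flow,  # outflow
        "red": stocks["red"] + flow,              # inflow
    }

stocks = {"undecided": 10_000.0, "red": 0.0}
for _ in range(15):                               # one cycle per fake news story
    stocks = step(stocks, rate=0.02)
print(f"Voters moved to Red after 15 stories: {stocks['red']:.0f}")
```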

“Regrettably, developing a policy and a legal code for dealing with modern fake news is more complicated than identifying modern fake news.”

We use the model to assess the impact of the different distribution strategies on the outcome of an election between the Red proposition and the Green proposition. In our experiment, the Red proposition uses a fake news campaign to strengthen its support. In contrast, the Green proposition does not deploy fake news but instead is the victim of fake news. Under our assumptions, the Green proposition is slightly more credible. If there is no fake news campaign, Green wins with 55% of the votes cast.

We find that targeting of fake news stories to their intended readers is clearly critical to their success.

  • Our first experiment is an Untargeted Broadcast Campaign: A set of 15 fake news stories is broadcast to all voters, with each story attempting to move voters to accept the Red proposition. A small percentage of voters already sympathetic to proposition Red will become more sympathetic, more committed or both. However, voters initially sympathetic to Green will likewise become more sympathetic to Green, more committed or both. Green still wins, with 53% of the vote. The percentage of Green voters affected by the backlash is smaller than the percentage of Red voters affected by the fake news campaign, but even after a saturation campaign, Green still holds a slight edge.
  • Our second experiment is a Targeted Narrowcast Campaign: Once again, a set of 15 fake news stories is distributed, but now it is narrowcast only to those segments sympathetic to Red. Again, a small percentage of voters already sympathetic to proposition Red will become more sympathetic, more committed or both. However, since the campaign is narrowcast only to voters sympathetic to Red, the possibility of a Green backlash is greatly reduced. With no backlash, Red now wins with 50.2% of the vote. It’s a razor-thin margin. But it is enough.
  • Our third experiment also involved a Targeted Narrowcast Campaign: Once again, a set of 15 fake news stories is distributed. However, in this experiment we make slightly different assumptions. We acknowledge that it is difficult to maintain total secrecy about an extended campaign of disinformation, especially with increased awareness of fake news, of election manipulation and of foreign attempts to alter election outcomes. A backlash that begins midway through the campaign, after eight cycles of fake news stories, is sufficient for Green to retain an advantage, albeit a small one, with 52% of the votes. Although the percentage of Green voters affected by the backlash is smaller than the percentage of Red voters affected by the fake news campaign, Green still holds a slight edge when a delayed backlash occurs.
  • Our fourth and final experiment is a Precision Narrowcast Campaign: This is the sort of campaign we discussed in our previous post, in which detailed private information is used to craft fake news stories to align with the beliefs, existing knowledge and demographics of their recipients. The response to each story is much greater, and consequently a smaller set of fake news stories is sufficient. Eight stories are distributed, rather than 15. Again, these stories are narrowcast only to segments sympathetic to Red, but now they are precision-targeted to each sub-segment of Red voters. Each story is more effective. Fewer stories are needed. Again, voters already sympathetic to proposition Red will become more sympathetic, more committed or both. However, since the campaign is narrowcast only to voters sympathetic to Red, and is very short, the possibility of a Green backlash is now eliminated. Red wins with 51% of the vote. (All four scenarios are approximated in the simulation sketch after this list.)
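
For readers who want to experiment, the four scenarios can be approximated in a few lines of Python. The sketch below is a simplified stand-in, not the calibrated model: the population sizes, baseline turnout and per-story response rates are all invented, so it reproduces the qualitative pattern of the results (who wins under each strategy, and why) rather than the exact vote shares reported above.

```python
# Simplified stand-in for the four distribution experiments.
# All parameters below are invented for illustration; only the
# qualitative outcomes mirror the text, not the exact vote shares.

RED_BASE, GREEN_BASE = 45_000, 55_000  # sympathizers: Green leads with no campaign
TURNOUT = 0.60                         # baseline probability of voting

def run_campaign(n_stories, broadcast, gain, backlash=0.01, detect_at=None):
    """Return (red_votes, green_votes) after a Red fake news campaign.

    broadcast -- stories reach all voters (True) or only Red sympathizers
    gain      -- per-story turnout boost among Red sympathizers
    backlash  -- per-story turnout boost among Green voters who detect lies
    detect_at -- cycle at which a narrowcast campaign is exposed (None = never)
    """
    red_t, green_t = TURNOUT, TURNOUT
    for cycle in range(n_stories):
        red_t = min(1.0, red_t + gain)
        if broadcast or (detect_at is not None and cycle >= detect_at):
            green_t = min(1.0, green_t + backlash)
    return RED_BASE * red_t, GREEN_BASE * green_t

def red_share(red, green):
    return 100.0 * red / (red + green)

# 1. Untargeted broadcast: full backlash, Green still wins.
r, g = run_campaign(15, broadcast=True, gain=0.012)
print(f"1. Broadcast:             Red {red_share(r, g):.1f}%")

# 2. Targeted narrowcast, never detected: no backlash, Red squeaks by.
r, g = run_campaign(15, broadcast=False, gain=0.012)
print(f"2. Narrowcast (secret):   Red {red_share(r, g):.1f}%")

# 3. Narrowcast detected midway (after 8 stories): Green holds on.
r, g = run_campaign(15, broadcast=False, gain=0.012, detect_at=8)
print(f"3. Narrowcast (detected): Red {red_share(r, g):.1f}%")

# 4. Precision narrowcast: fewer, more effective stories, done before detection.
r, g = run_campaign(8, broadcast=False, gain=0.028)
print(f"4. Precision narrowcast:  Red {red_share(r, g):.1f}%")
```

The design choice that drives everything is the single `if` guard: backlash accrues only on cycles when unsympathetic voters are exposed, which is why broadcasting, or a leak midway through a narrowcast campaign, erodes Red’s gains.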

Importance of Personalization

Our experiments demonstrate that the use of private and personal information is as important to the targeting of fake news stories as it is to their construction. With modern fake news, lies can be carefully constructed and carefully supported differently for each group of intended recipients. Likewise, they can be carefully distributed so that each story goes only to its intended recipients. This maximizes the desired impact. It also eliminates the backlash produced when groups of readers recognize the stories as false and determine that a disinformation campaign is underway.

“Perhaps a regulatory policy based on cutting off access to the massive amounts of personal data needed to create and target fake news would succeed without limiting First Amendment rights.”

In summary, we find that clumsy attempts to manipulate popular opinion can be counterproductive. This is consistent with what we digital optimists believe would be true in the age of the internet, with a fully informed public able to assess the accuracy of all information. When people who are not deceived by a specific set of lies become aware that these lies are being broadcast, they often respond with hostility, producing a voter backlash.

However, our findings also show that an idea that is not originally embraced by a plurality of the electorate can still win the majority of the popular vote with effective and well-tailored modern fake news techniques. The ability to win with a fake news campaign increases the more it is narrowcast only to sympathetic voters, reducing backlash. More importantly, the ability to win with a fake news campaign increases the more it is tailored to the perceptions and limitations of individual readers, since precision tailored campaigns produce greater responses, can be shorter, and can be concluded before the opposition becomes aware of the campaign and before voter backlash occurs.

Regrettably, developing a policy and a legal code for dealing with modern fake news is more complicated than identifying modern fake news. Western democracies protect freedom of the press because freedom of communication, freedom of information and the right to assemble are essential to the functioning of democracies. When the First Amendment was drafted, American statesmen did not envision that everyone would be able to draft convincing communications. Printing presses were rare and expensive; in contrast, the internet is ubiquitous, and the worst offenders in the fake news business may not even be located in the US.

It’s easy to say that anyone who drafts different versions of a story on the same topic, and targets them at different individuals based on harvested private personal information, is engaging in fake news. It’s hard to make that the basis of effective regulation. The easiest way to evade a regulation built on this definition would be to make different members of a fake news team responsible for different target populations, with each version launched from a different location. And one can’t simply ban all attempts to target using personal information; that would destroy legitimate advertising, and indeed all attempts to identify like-minded individuals to form social clubs and other organizations.

“It’s hard to imagine constructing a definition of fake news based on detecting lying, because that requires someone with the authority to ascertain what is true.”

Fake news is also characterized by the repeated use of lies. Unfortunately, it’s hard to imagine constructing a definition of fake news based on detecting lying, because that requires someone with the authority to ascertain what is true. In today’s highly charged political climate I cannot imagine trusting a federal commission to ascertain the truth of claims about climate change, or about the legitimacy of the theory of evolution or of creation science. Indeed, with an executive branch that reserves the right to label as fake news everything it finds unflattering, perhaps it is not possible to regulate fake news without putting traditional high-quality journalism at risk.

The current fake news problem is new. Any solution that solves the problem without interfering with freedom of speech and freedom of the press will need to be carefully designed and driven by examination of the data being harvested. Perhaps a regulatory policy based on cutting off access to the massive amounts of personal data needed to create and target fake news would succeed without limiting First Amendment rights. The policy will certainly need to adapt, as the most skilled abusers of fake news adapt to the regulations placed upon them.

Details of Our Model

We assume that the alternative Green proposition enjoys a slight advantage in public support. That is, we assume that in the absence of a successful fake news campaign, Green would win a popular election. We further assume that voters differ in how closely they agree with propositions Red and Green, and in how passionately they believe in the correctness of their views. Interested readers can examine our highly stylized voters in a voter attitude space, shown in Figure 1. The population is distributed more heavily in the center of the belief space, which explains why Green enjoys a slight advantage. There are also four bands of commitment, where the outermost bands are the most committed, and most likely to vote. Consistent with recent electoral experience, we assume that the least committed segments have the greatest number of voters. Again, interested readers can check the distribution of voters across segments and across bands, as shown in Table 1.

Our computer model “shows” each fake news story to a “population of readers,” as each story is “published.” We assume that each story is “read” by only part of the population, and that only some of the readers are actually affected by each story. More precisely, we assume that a small percentage of targeted recipients of fake news respond by becoming more sympathetic to the Red position, more committed and more likely to vote for Red, or both. We assume that a smaller percentage of untargeted and unsympathetic recipients of fake news experience backlash, and become more sympathetic to the Green position, more committed and more likely to vote for Green, or both. After the “population” “responds” to each story, the next story is “published.” The model tracks the final tallies of the “population” who would vote for Red or Green, which determines the outcome.
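
In code terms, a single “publication” cycle might look like the sketch below, written at the individual-voter level. The read rate, the response rates and the 0.1 “lean” increments are placeholders rather than the calibrated values from the academic model; the point is only the asymmetry between the targeted response and the smaller backlash.

```python
import random

# One "publication" cycle, sketched at the individual-voter level.
# P_READ, P_SHIFT, P_BACKLASH and the 0.1 lean increments are
# placeholder values, not the calibrated rates from the actual model.

P_READ = 0.30      # fraction of reached voters who actually read a story
P_SHIFT = 0.05     # sympathetic readers who respond as the story intends
P_BACKLASH = 0.02  # unsympathetic readers who recognize the lie and recoil

def publish_story(voters, reached_segments):
    """voters: dicts with 'segment' (1-9) and 'lean' in [-1.0, +1.0],
    where -1.0 is firmly Green and +1.0 is firmly Red."""
    for v in voters:
        if v["segment"] not in reached_segments or random.random() > P_READ:
            continue                               # never saw the story
        if v["lean"] > 0 and random.random() < P_SHIFT:
            v["lean"] = min(1.0, v["lean"] + 0.1)  # sympathizer firms up
        elif v["lean"] <= 0 and random.random() < P_BACKLASH:
            v["lean"] = max(-1.0, v["lean"] - 0.1) # backlash toward Green

def tally(voters):
    """Final count: the sign of a voter's lean decides the vote."""
    red = sum(1 for v in voters if v["lean"] > 0)
    return red, len(voters) - red

# Example: a 15-story broadcast campaign over a small synthetic population.
voters = [{"segment": s, "lean": random.uniform(-1, 1)}
          for s in range(1, 10) for _ in range(100)]
for _ in range(15):
    publish_story(voters, reached_segments=set(range(1, 10)))
print(tally(voters))
```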

The model and computer program were originally developed for an academic publication; additional details are available from the author.


Figure 1. The model is shown as a semi-circle to indicate that the distance between inner grid segments is smaller than the distance between outer grid segments, so that as voters move further out from the center, corresponding to greater ideological commitment, it becomes harder for outside agents to alter voters’ views.

Band of          Segment in Voter-Attitude Space
Commitment      1     2     3     4     5     6     7     8     9
A            1000  1250  1500  1750  2000  1750  1500  1250  1000
B            2000  2500  3000  3500  4000  3500  3000  2500  2000
C            3000  3750  4500  5250  6000  5250  4500  3750  3000
D            4000  5000  6000  7000  8000  7000  6000  5000  4000

Table 1. Distribution of Voters in Voter-Attitude Space.
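
For completeness, Table 1 is easy to encode and sanity-check directly; the snippet below verifies the total population and the center-weighting described above (the dictionary layout is an illustrative choice, not part of the original model code).

```python
# Table 1 encoded as bands of commitment (rows A-D) x segments 1-9.
TABLE_1 = {
    "A": [1000, 1250, 1500, 1750, 2000, 1750, 1500, 1250, 1000],
    "B": [2000, 2500, 3000, 3500, 4000, 3500, 3000, 2500, 2000],
    "C": [3000, 3750, 4500, 5250, 6000, 5250, 4500, 3750, 3000],
    "D": [4000, 5000, 6000, 7000, 8000, 7000, 6000, 5000, 4000],
}

total = sum(sum(row) for row in TABLE_1.values())
by_segment = [sum(col) for col in zip(*TABLE_1.values())]
print(f"Total voters: {total:,}")          # 130,000
print(f"Voters by segment: {by_segment}")  # peaks at the center segment (5)
```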