University of Maryland's Jennifer Golbeck and Penn's David Elliot Berman discuss the impact of fake users on social media.

Social media users, advertisers and regulators were aghast this past week over revelations in a report by The New York Times of a thriving cottage industry that creates fake followers on Twitter, Facebook or other channels for anybody willing to pay for them. Called “bots,” these fake accounts are available in the thousands to those who want to boost their popularity with tweets or retweets on Twitter, or Facebook likes or shares.

Although Twitter and Facebook officially frown on users buying followers and regularly take down fake accounts, they have a vested interest in the popularity scores of their users because advertisers use those metrics. The political will to legislate against buying followers may also be lacking, experts say, pointing out that some of President Trump’s appointees have bought followers, as have others such as computer billionaire Michael Dell and Treasury Secretary Steve Mnuchin’s wife, actress Louise Linton.

“This is a dirty and open secret of social media,” said Kartik Hosanagar, Wharton professor of operations, information and decisions. “This has been going on for a while, and The New York Times article finally puts the spotlight on this shadow economy. Overall, social media is a complete mess right now in terms of the sanctity of information circulating on it.”

At least one law enforcer has been quick to act — New York attorney general Eric T. Schneiderman. “The growing prevalence of bots means that real voices are too often drowned out in our public conversation,” he tweeted last Saturday after the article was published, adding that “impersonation and deception are illegal” under New York law. “Those who can pay the most for followers can buy their way to apparent influence.”

Later that day, Schneiderman’s office launched an investigation into Devumi, a social media marketing services firm in Florida that The New York Times report highlighted as one of those behind the fake followers. “Drawing on an estimated stock of at least 3.5 million automated accounts, each sold many times over, [Devumi] has provided customers with more than 200 million Twitter followers,” the newspaper’s investigation found.

“This is a dirty and open secret of social media. Overall, social media is a complete mess right now in terms of the sanctity of information circulating on it.”–Kartik Hosanagar

The Attention Economy

Jennifer Golbeck, professor and director of the Social Intelligence Lab at the University of Maryland, noted that the phenomenon of buying “attention” has been around for a while. She pointed to online contests where, say, somebody who posts a picture of their pet may pay for votes while canvassing for them.

“You actually can go buy from some of the same companies hundreds or thousands of votes in online contests, and they exist obviously on platforms like Twitter and Facebook,” Golbeck said. She has also bought YouTube views as part of her research. “It’s super simple. For example, if you have a tweet that you want to get likes for or have retweeted, you go to the website of one of these companies (like a Devumi), you paste in the link to that tweet, and you pay them via PayPal. As soon as your payment processes, you can watch the ‘like’ count zip up in the next two or three minutes.”

Devumi’s rates start from $10 for 500+ followers, delivered within two days. Devumi’s founder, German Calas, denied that his company sold fake followers and said he knew nothing about social identities stolen from real users, The New York Times report noted.

“Devumi is one of perhaps hundreds of a cottage industry of websites that sell Facebook likes, Twitter followers and so forth,” said David Elliot Berman, a doctoral candidate at the University of Pennsylvania’s Annenberg School for Communication, who has also been studying the issue of fake followers. “Devumi is a reseller or wholesaler of bots created mostly in the Third World, and its customers are people who want to boost their social media presence and essentially engage in a form of ‘attention hacking.’”

Golbeck and Berman discussed the fake-follower controversy on the Knowledge at Wharton show on Wharton Business Radio on SiriusXM channel 111. (Listen to the podcast at the top of this page.)

Wharton marketing professor Pinar Yildirim explained the extent of damage that fake followers could create. The first issue relates to identity theft or identity copying, where people’s personal information or photos are used to create fake accounts. The second issue is the misperception and misinformation that arises from these accounts. “Topics that are highly mentioned, tweeted, tagged may seem important, or supported, or liked by others when in fact they are not,” she said.

Yildirim noted that the harm may not be overly significant when a celebrity or a brand merely pretends to be liked and endorsed by many. But the damage could be worse if politicians and public figures make statements that do not find broad support among the public, “but because of fake accounts and bots look like they do,” she said. “[That] is likely to create negative emotions [such as] rage, anger, and an even unnecessary divide between the masses.” Fake news and the diffusion of fake news, for instance, are partly due to paid fake accounts, she noted.

Fake followers will end up undermining the value of social media platforms, Wharton marketing professor Ron Berman predicted. “What will eventually happen is that users and other stakeholders will start realizing many tweets are fake, or non-representative, and will basically start ignoring what’s posted on Twitter,” he said. “The biggest impact will probably be when journalists, who try to use Twitter as a source of information, will start being more sophisticated in verifying the validity of tweets. As for regulators, I am not sure they will need to intervene, as long as users realize anyone can tweet anything.”

Conflict of Interest

According to Hosanagar, Twitter, Facebook and others need to do more to curb the practice of fake followers, but their business models may get in the way of those efforts. “They are currently not incentivized enough to clamp down on these types of accounts because it suits them well,” he said. “But the long-term implications will be dramatic,” he warned.

David Berman said the practice of buying followers suits the business model of social media platforms. “They are incentivized to encourage their users to pursue these kinds of activities,” he said. “They want people to create viral content. They want people to create engaging content because that makes their platform more sticky and makes it more attractive for people to stay on. This is an ‘attention economy,’ which is based on self-promotion, and social bonds are one way to address this.”

Advertisers would also not take kindly to fake followers, said David Berman. “When people advertise on Facebook, they expect that their advertisements will reach a certain number of people – flesh and blood people,” he noted. “[They would see] a significant inefficiency because they’re spending money that’s not actually being used to get their content out to real people.”

“When people advertise on Facebook, they expect that their advertisements will reach a certain number of people – flesh and blood people.” –David Elliot Berman

According to The New York Times, an estimated 48 million of Twitter’s reported active users, or about 15% of its total user base, are “automated accounts designed to simulate real people,” while Facebook acknowledged in November that it may have about 60 million automated users.

Refinements in how social media companies measure the numbers of their followers might help, Ron Berman suggested. “Measuring the value of online followers is quite hard, so people resort to using simple metrics such as follower counts,” he said. “If the metric were ‘how many of the followers are valuable for me,’ then social media platforms would have had an incentive to take fake accounts off their platforms. But as long as counts matter, they clearly will not have an incentive to intervene too strongly.”

The Evolution of Fake Followers

Over the years, the marketplace has provided openings for fake followers to step in. Hosanagar recalled that some eight to nine years ago, social media marketing was focused almost exclusively on acquiring more and more followers. As a result, lots of brands and individuals spent money on acquiring followers, and marketing companies cropped up that offered to build one’s follower base, he said.

“[However,] in practice, most of these followers did not engage with the social media posts of brands and so-called influencers,” Hosanagar continued. “In response, the emphasis shifted to engagement – how many of your followers actually like, comment on or share your social media posts.

“Posts that show brand personality — like emotional, humorous posts — or ones that emphasize your philanthropic activities [get more traction],” he noted. That has been evidenced by his research on aspects of social media content that are associated with greater engagement.

Although the emphasis shifted towards engagement for social media companies, “that didn’t solve the problem,” said Hosanagar. “These shadow economies doubled down on creating bots and fake accounts that would share or retweet their clients’ posts. So, engagement on someone’s posts looks high even though there is no real engagement. In some cases, the clients are not even aware this is happening and are innocent victims. In other cases, they fully understand it and find it convenient to play along.”

Ron Berman is surprised that the manipulation of follower counts is being described as a “new” discovery. He said this issue has been known to marketers and Facebook advertisers for several years now. He pointed to a 2014 video exposé by the YouTube channel Veritasium on buying fake Facebook followers.

Companies like Facebook and Twitter typically try to work around these problems in three ways, “none of which is perfect,” said Yildirim. “First, data can tell a lot about the authenticity of an account.” She noted that an authentic social media account could mention various topics such as the Eagles in the Super Bowl, the Uber they took that morning, their kids, or a TV show last night. They are also likely to show a pattern in the timing of their posts — for example, a student with a fixed course schedule may be posting in between classes, she added. Also, “they will have two-way engagements with other accounts.”

On the other hand, fake accounts are more likely to have only one-way engagements, said Yildirim. “Real people are located in a network of other real people and show a pattern of usage that is likely to differ from that of the fake account. Our online accounts are a reflection of our offline connections, so the average real person will be found, tagged and mentioned by others, will post photos and show up in others’ photos, and will react to unexpected events that are happening.”

With a paid account, a user does not have the incentives to exhibit all those behaviors and mimic a real person perfectly, “unless paid very well,” she noted. “Analyzing the patterns in the usage data already tells a lot … about how likely [it is that] an account is real or not.”
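The usage signals Yildirim describes lend themselves to simple rule-of-thumb scoring. Below is a minimal, hypothetical sketch in Python that combines three such signals – variety of topics, regularity of posting times, and the share of two-way interaction – into a rough “suspicion” score; the field names and thresholds are illustrative assumptions, not any platform’s actual method.

```python
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class AccountActivity:
    # Illustrative fields; real platforms have far richer signals.
    topics_mentioned: set    # distinct topics the account posts about
    post_hours: list         # hour of day (0-23) of each post
    replies_received: int    # interactions initiated by *other* accounts
    interactions_sent: int   # likes/retweets/replies the account sends out

def suspicion_score(a: AccountActivity) -> float:
    """Return a rough 0..3 score; higher means more bot-like (assumed thresholds)."""
    score = 0.0
    # 1. Real users tend to talk about many unrelated things (sports, work, TV).
    if len(a.topics_mentioned) < 3:
        score += 1.0
    # 2. Real users show a daily rhythm; bots often post on a rigid, uniform schedule.
    if a.post_hours and pstdev(a.post_hours) < 1.0:
        score += 1.0
    # 3. One-way behavior: lots of outgoing likes/retweets, almost nothing coming back.
    total = a.replies_received + a.interactions_sent
    if total and a.replies_received / total < 0.05:
        score += 1.0
    return score

# Example: an account that only promotes one topic, on a fixed schedule, with no replies.
bot_like = AccountActivity({"brand_x"}, [3, 3, 3, 3], replies_received=1, interactions_sent=400)
print(suspicion_score(bot_like))  # 3.0 under these illustrative thresholds
```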

“We are more influenced by the people we know, people who are in our social circles, and these fake active accounts may simply not matter to many users for that reason.” –Pinar Yildirim

Second, the social media platforms also screen accounts based on flags raised by other users, and thus “crowd-source the policing,” said Yildirim. Third, they hire moderators who use personal judgment to detect and curb harmful accounts, she added.

Ron Berman noted that Google faced a similar issue with websites trying to game search engine optimization (SEO) to improve their rankings on its search engine. “The solution was to focus less on just incoming and outgoing links – easy for a machine to create – and focus more on quality, diverse and relevant content, which is harder for a machine to create.”

How Bots Evade Detection

The creators of the bots try to stay a step ahead of a Facebook or Twitter that might attempt to identify them and take them down. If the detection algorithms at Facebook or Twitter find that some users only like certain pages they are paid to promote, they will weed them out and close them down. “But what [the bot creators] do to trick these algorithms into believing that they are real people is to like random people that they have not been paid to like or tweet,” said David Berman.

Legitimate ways to gain traction in the so-called “attention economy” do exist. Golbeck pointed to Facebook’s marketing offerings for businesses. But Facebook or Twitter may find such services in conflict with their attempts to go after fake user pages, she said. “Facebook has a disincentive to identify and shut down some of those fake ‘liking’ accounts because, ultimately, those accounts liking your content dilutes it getting out to the real audience. It gives you more incentive to pay Facebook to boost the visibility of your post.”

Three years ago, Golbeck published lists of Russian bot networks that were posting spam and creating fake followers and wrote research papers on them. “Those fake accounts are still active on Twitter,” she said. “The last thing they want to do is take down, say, 20% of their active monthly accounts that are bought. It’s something where you can make a ton of money. It seems like it’s making up a substantial percentage of the accounts on these networks.”

Ways to Weed Out Fakes

At the same time, the leading social media companies are recognizing the dangers of fake followers and are creating new ways to check them. “YouTube, for example, has adjusted its monetization qualifications, or the rates for you to be able to make money on ads by now requiring view time,” said Golbeck. That refers to a certain number of hours per year that people watch a video. “That’s a lot harder to buy than people just liking your page. The metrics are shifting a little bit away from number of followers or number of likes to real engagements that are harder to automate, though we can certainly expect that the bots are going to catch up and find ways to do that [as well].”

Social media companies can do a better job of validating user accounts, and also borrow ideas from other online platforms that have faced similar challenges, said Hosanagar. “On the web, how does Google figure out that a page is genuine and should be ranked higher?” he asked. “They look at the number of other reputable websites that link to your website as an indicator of your online reputation or reliability. Twitter can similarly look at followership from reputed or validated accounts as an indicator.” He also pointed to eBay, which rolled out rating systems to establish a profile’s reliability and trustworthiness. “They could consider a variant of such rating systems to validate accounts.”
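Hosanagar’s analogy to Google’s link analysis and eBay’s ratings amounts to weighting followers by the reputation of the accounts behind them rather than simply counting them. The sketch below illustrates that idea under assumed reputation values (say, 1.0 for a validated account, near zero for an unknown one); it is not Twitter’s or eBay’s actual formula.

```python
def reputation_weighted_followers(followers, reputation):
    """
    followers:  iterable of follower account IDs
    reputation: dict mapping account ID -> assumed reputation in [0, 1]
                (e.g., 1.0 for validated accounts, 0.0 for unknown ones)
    Returns a weighted follower count: low-reputation accounts add almost
    nothing, so purchased bot followers barely move the score.
    """
    return sum(reputation.get(f, 0.0) for f in followers)

# Example: three reputable followers outweigh 10,000 unknown (possibly purchased) ones.
followers = ["nyt", "wharton", "verified_user"] + [f"bot_{i}" for i in range(10_000)]
reputation = {"nyt": 1.0, "wharton": 1.0, "verified_user": 0.9}
print(reputation_weighted_followers(followers, reputation))  # 2.9
```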

Hosanagar added that social media companies could also tap into research on using machine learning techniques to identify fake accounts. Those techniques look at the activities of an account (languages used versus location of the account, patterns in tweeting and followership, and many other factors) to detect fake accounts, he explained. Online platforms are surely using some of these techniques already, “but they can and should invest more in these efforts,” he added. “The problem is solvable at least to an extent that fake followership and activities can be brought down to less than 1% of these networks. But the question is whether they want to.”
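The machine learning approach Hosanagar refers to typically means training a classifier on account-level features – language versus location mismatch, posting cadence, follower ratios – using accounts already labeled as real or fake. The snippet below is a minimal sketch with scikit-learn’s logistic regression on made-up feature vectors; the features, labels and data are placeholders for illustration, not a production detection pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one account; columns are illustrative features:
# [language/location mismatch (0/1), posts per day, followers/following ratio]
X_train = np.array([
    [0, 4.0, 1.2],     # likely real
    [0, 2.5, 0.8],     # likely real
    [1, 180.0, 0.01],  # likely fake: mismatch, posts constantly, follows everyone
    [1, 150.0, 0.02],  # likely fake
])
y_train = np.array([0, 0, 1, 1])  # 0 = real, 1 = fake (labels are assumed)

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new, unseen account.
new_account = np.array([[1, 120.0, 0.03]])
print(model.predict(new_account))        # e.g., [1] -> flagged as likely fake
print(model.predict_proba(new_account))  # class probabilities
```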

“A lot of people don’t want the bots to go away if it means their followers and likes decrease.” –Jennifer Golbeck

Even after a bot has been detected, it is a challenge for a platform to decide how to deal with it, said Yildirim. “Is a paid account harmful to others or does it simply not matter? We are more influenced by the people we know, people who are in our social circles, and these fake active accounts may simply not matter to many users for that reason. Secondly, is the influence of a paid follower different than the influence of a paid blogger? When a brand buys a lot of followers, is it different than advertising? These questions have to be carefully answered, and the platforms do not yet have answers.”

According to Ron Berman, the key to handling this problem is to focus on original content. “Twitter as well as Facebook can emphasize and give priority or weight [to] original new content created by users, instead of promoting retweets and content shares,” he said. “It will be hard – although not impossible – for firms to create bots that create high value original content, which should allow the platforms to filter out accounts that only rehash old content and send it again.”
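Berman’s suggestion – rewarding original posts and down-ranking accounts that only recirculate others’ content – can be approximated with a simple originality ratio per account. The short sketch below is a hypothetical illustration under assumed data structures and an assumed cutoff, not an actual Twitter or Facebook ranking rule.

```python
def originality_ratio(posts):
    """
    posts: list of dicts like {"is_original": bool}
    Returns the share of an account's posts that are original content rather
    than retweets or shares. Accounts that only rehash content score near 0.
    """
    if not posts:
        return 0.0
    return sum(p["is_original"] for p in posts) / len(posts)

# A feed-ranking rule could down-weight accounts below some cutoff (assumed 0.2 here).
account_posts = [{"is_original": False}] * 18 + [{"is_original": True}] * 2
ratio = originality_ratio(account_posts)
print(ratio, "-> down-rank" if ratio < 0.2 else "-> keep normal weight")
```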

‘Deeper Investigations’

In time, Golbeck said she expects to see “deeper investigations” to identify fake followers and the firms creating them. “We’re going to see a lot of investigations certainly on the legal side regarding fraud, but also on the political side – how is this making politicians look more popular? Bots have been an issue for Trump in a lot of ways – different ones than this, but it’s expanding that universe that we’ll look at.”

Golbeck and David Berman are not convinced that self-regulation is the solution. “It’s going to be hard to get to a place where we see internal industry regulation … to stop those bots,” said Golbeck. Added Berman: “I would not rely on the beneficence of Facebook and Twitter to regulate themselves. I think there has to be external regulation.”

According to Golbeck and David Berman, another problem in effective policing is that many regulators don’t fully understand the social media space and its abuses. “They tend to be older; they didn’t grow up with this technology,” Golbeck said. “Some of them are perfectly capable of using and understanding these [platforms], but not all of them do.”

If changes do happen, not all platform users will be happy. “A lot of people don’t want the bots to go away if it means their followers and likes decrease,” said Golbeck. She recalled “a big purge” of fake accounts a few years ago on Instagram, which caused follower numbers to drop sharply for some users and angered them, even in cases where they hadn’t bought those bots. “They were demanding that Instagram give back the fake followers because they wanted to look more popular, even though they knew they were fake.”