
Social media bots are damaging our democracy

On the internet, nobody knows you're a natural language processing system.

Social media has become our town crier. When major news breaks, roughly two-thirds of American adults now find out about it online in real time. But the aftermath of the week's third mass shooting, environmental catastrophe or political meltdown is often rife with false claims, misinformation and outright conspiracy theories. Some of this comes simply from the confusion surrounding an unfolding situation, but to an increasing degree, the discussions around these events are being deliberately -- and effectively -- influenced by an army of autonomous digital actors.

One need only look at the apparent suicide of Jeffrey Epstein, who had been implicated in an international child-sex-trafficking-ring investigation, to see the effects of social-media bots. Within moments of the announcement, Twitter was flooded with conspiracy theories surrounding Epstein's death. Unsourced assertions and hypotheses spread through the network faster than the actual news did, thanks in part to prodigious retweeting by automated accounts.

Social bots are algorithmic software programs designed to interact with humans, sometimes to the point of persuading them that the bot is human, or to autonomously perform mundane functions such as reminding people to like and subscribe in a video's comments. Think of them as chatbots with additional autonomy. In fact, one of the earliest bots was ELIZA, a natural language processing program developed at MIT in 1966. It was one of the first systems to even attempt the Turing Test.

As the internet went mainstream in the early 1990s and IRC (Internet Relay Chat) channels came into vogue, so too did bots. They were designed to automate specific actions, respond to commands and interact with humans in the channel -- functions that have since been adapted to modern social-media platforms like Twitter and Facebook via their APIs. Twitch leans on bots especially heavily, in part because its chat is built on the same technology as IRC. Their roles now include everything from responding to user queries to moderating discussions to actively playing games. They've been put to use outside social media as well: Google's web crawler is a bot, as is Wikipedia's anti-vandalism system.
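That IRC-era pattern -- connect, watch the channel, answer commands -- still underpins many of these bots. Here's a minimal sketch in Python; the server, channel and nickname are placeholders, and a real bot would need error handling and registration delays:

```python
import socket

# Minimal IRC command bot: connect, join a channel, answer the
# server's PING keepalives and a single "!hello" command.
# Server, channel and nick are placeholders, not a real deployment.
SERVER, PORT = "irc.example.net", 6667
CHANNEL, NICK = "#bot-sandbox", "demobot"

sock = socket.create_connection((SERVER, PORT))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())
sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

buffer = ""
while True:
    buffer += sock.recv(4096).decode(errors="ignore")
    *lines, buffer = buffer.split("\r\n")  # keep any partial line buffered
    for line in lines:
        if line.startswith("PING"):
            # Keepalive: echo the payload back or the server drops us
            sock.sendall(("PONG" + line[4:] + "\r\n").encode())
        elif "PRIVMSG" in line and "!hello" in line:
            # A human typed "!hello" in the channel; respond in kind
            sock.sendall(f"PRIVMSG {CHANNEL} :Hello, I'm a bot!\r\n".encode())
```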

But on social media, they shine. A modest network of coordinated bot accounts on Twitter can massively expand the attention a tweet receives, influence the course of a thread, and either mitigate or multiply the impact of a media event. An April 2018 study by the Pew Research Center estimates that between 9 and 15 percent of all Twitter accounts are automated. What's more, 66 percent of all tweeted links to popular sites were disseminated by bot accounts, and a staggering 89 percent of links to news-aggregation sites were bot-sourced.

Compared with humans, these bots are relentless. The same study found that the 500 most active suspected bot accounts were responsible for 22 percent of tweeted links to popular news sites, while the 500 most active human accounts produced barely 6 percent of such links.

And it's not as though these bots are particularly subtle about what they're doing. A separate Pew study from October 2018 found that 66 percent of Americans are aware that these bots exist, while a whopping 80 percent of those folks believe that bots are primarily used for malicious purposes.

But what Americans can't seem to do is confidently identify bots when interacting with them. Only 47 percent of survey respondents were very or somewhat confident they could recognize a bot account, and a mere 7 percent were very confident. That's fewer people than the share of guys who think they could score a point off Serena Williams.

The fact that Americans are so gullible online does not bode well for us. "One of the big problems for the general public is we mostly believe what we see and what we're told," Frank Waddell, assistant professor at the University of Florida's College of Journalism and Communications, told Engadget. "And this is kind of amplified on social media where there's just so much information."

Increasingly, bot networks are being deployed to spread misinformation, and the financial damage is already real. We've already seen automated activity influence the stock market. The so-called Flash Crash of May 6th, 2010, wherein the Dow dropped nearly 1,000 points (about 9 percent of its value) in minutes, was triggered by a flurry of automated trades from a single mutual fund's trading program. And in 2013, the Syrian Electronic Army hacked the Associated Press Twitter account and posted a false story about then-President Obama being injured in a terrorist attack, briefly crashing the market until the hoax was revealed.


These bots are even more dangerous to our democracy. "Unfortunately, the news is mostly bad: these bots have been very effective in the past at shaping public opinion," Waddell continued. "They can just do more tweeting and sharing than the average person, and they can do that by quite a large magnitude." By flooding a discussion with their own content, they can shape the nature of public opinion, he explained.

He points to the 2010 election as one of the earliest examples of bots being used to influence political discourse. "Some people call it astroturfing, other people call it Twitter bombs," he said. "The whole purpose of it, from a political perspective, was to smear other candidates. It's meant to promote one candidate while discrediting another."

These influence campaigns can be downright insidious, Waddell argues. "Bots may be tweeting in a way that supports how users already feel; they might already be inclined to, let's say, support or oppose gun control. And when you have Twitter bots tweeting consistently in line with [the user's] beliefs, they may not realize that they're being sucked into this false consensus being manufactured." We've seen examples of this practice in the discussions surrounding Brexit, special counsel Robert Mueller's report to Congress, and the Saudi government's ham-fisted coverup attempt after the murder of US-based journalist Jamal Khashoggi. It keeps happening because it's just so damn effective. Sometimes, it's even welcomed.

Just as Twitter played an outsize role in the 2012 election and Facebook did in the 2008 cycle, Reddit commanded an inordinate amount of influence during the 2016 presidential race -- specifically, far-right haven /r/The_Donald. As Saiph Savage, assistant professor of computer science at West Virginia University, and her co-authors found in their 2018 study, "Mobilizing the Trump Train," social bots played a critical role in helping to motivate and mobilize the subreddit's adherents.

They did this by generating slang phrases that would then spread outward, creating a common dialect within the group, and by playing communal games with human Redditors. For example, the TrumpTrainBot would engage users by having them spout off slang phrases or reply to "accelerate" the Trump Train. After some 54,540 responses, the bot would drop messages like this into the discussion:

WE JUST CAN'T STOP WINNING, FOLKS THE TRUMP TRAIN JUST GOT 10 BILLION MPH FASTER CURRENT SPEED 175,219,385,117,000 MPH. At that rate, it would take approximately 9.209 years to travel to the Andromeda Galaxy (2.5 million light-years)!
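The mechanic behind messages like that is trivial: every reply bumps a shared counter, and the bot responds with the new total dressed up in back-of-the-envelope astronomy. Here's a toy sketch of the pattern, with illustrative constants and wording rather than TrumpTrainBot's actual code:

```python
# Toy sketch of a reply-counter "train" bot: each user reply bumps a
# shared counter, and the bot answers with the new total plus a bit of
# back-of-the-envelope astronomy. All constants and wording here are
# illustrative assumptions, not TrumpTrainBot's actual code.
MILES_PER_LIGHT_YEAR = 5.879e12
ANDROMEDA_LIGHT_YEARS = 2.5e6
BOOST_MPH = 10_000_000_000  # each reply "accelerates" the train

class TrainBot:
    def __init__(self):
        self.speed_mph = 0

    def on_reply(self) -> str:
        self.speed_mph += BOOST_MPH
        hours = ANDROMEDA_LIGHT_YEARS * MILES_PER_LIGHT_YEAR / self.speed_mph
        years = hours / 8766  # average hours per calendar year
        return (f"CURRENT SPEED {self.speed_mph:,} MPH. At that rate, it "
                f"would take approximately {years:,.3f} years to travel to "
                f"the Andromeda Galaxy (2.5 million light-years)!")

bot = TrainBot()
for _ in range(3):  # simulate three user replies
    print(bot.on_reply())
```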

Amazingly, despite their influence, bots only constituted around one percent of all T_D users. "We have observed that while the number of bots can be small, they usually create the most content on online forums," Savage wrote to Engadget. She notes that bots play a similar role on the Twitch platform as well.

"I believe we are seeing on Twitch and Reddit that the number of bots are not large because, on these platforms, developers have to declare these automated accounts," she continued. "As a consequence, people are likely not openly create bots to have 'sock puppets' that can trick others into believing that a large number of people support their particular cause. Rather, bots are used to help humans in particular tasks."

This effect is not limited to the internet's various thought silos and echo chambers. A 2016 study out of USC examined nearly 20 million tweets collected between September and October of that year from roughly 2.8 million users. By analyzing the behavior of these accounts, the research team estimated that "about 400,000 bots are engaged in the political discussion about the presidential election, responsible for roughly 3.8 million tweets, about one-fifth of the entire conversation."

Given that these bots have, since their inception, been designed to mimic human behavior, they've proved incredibly difficult to root out from social-media platforms. Efforts to detect and identify bot accounts are already underway. The Botometer project, for example, is a free online application that scans a given Twitter account, as well as those associated with it, using more than a thousand criteria to make its decision. It was developed by the Indiana University Network Science Institute (IUNI) with the Center for Complex Networks and Systems Research (CNetS) at Indiana University.
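The project also maintains a Python client for querying the service programmatically. A minimal sketch, assuming you've obtained a RapidAPI key and Twitter app credentials (the credentials and the handle below are placeholders):

```python
import botometer  # pip install botometer

# Placeholder credentials; substitute your own.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account; higher scores suggest more bot-like behavior.
result = bom.check_account("@example_handle")
print(result["cap"])  # "complete automation probability" estimates
```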


Twitter itself has taken a number of proactive steps to curb the influence of bots. Since April, the company has used automation to nearly double the share of actionable abusive content that it uncovers proactively rather than through user reports, from 20 percent to 38 percent of the total. It has also reportedly tripled the amount of abusive content it addresses within the first 24 hours.

Bots on Twitter do exhibit some common behaviors through which they might be identified. They may go through bursts of activity after long bouts of dormancy, or go on multi-day marathon Like and Retweet binges. They may have a heavily skewed follower-to-following ratio, or be followed only by a cluster of equally sketchy, recently created accounts.

In short, if the account was created in March 2019, already has 143,000 posts, its handle is Barbara012490863 and, up until 30 minutes ago when it started spouting anti-vaxx slogans, it had only posted about the NFL, you're probably arguing with a bot. You might as well be arguing with a stump. Or an Amazon Fulfillment Center Ambassador.
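Crude as they are, those red flags translate readily into a first-pass filter. Here's a toy heuristic, with made-up thresholds and a hypothetical account record -- nothing resembling Twitter's or Botometer's actual methods:

```python
from datetime import datetime, timezone

# Fixed reference date so the example is reproducible; roughly when this
# piece was written. All thresholds below are illustrative guesses.
REFERENCE_DATE = datetime(2019, 8, 15, tzinfo=timezone.utc)

def naive_bot_score(account: dict) -> float:
    """Return a toy bot-likelihood score in [0, 1] from the red flags above."""
    score = 0.0

    # Flag 1: implausible posting volume for the account's age
    age_days = max((REFERENCE_DATE - account["created_at"]).days, 1)
    if account["post_count"] / age_days > 100:  # >100 posts/day, sustained
        score += 0.4

    # Flag 2: heavily skewed follower-to-following ratio
    ratio = account["followers"] / max(account["following"], 1)
    if ratio < 0.01 or ratio > 100:
        score += 0.3

    # Flag 3: auto-generated-looking handle (a name plus a long digit tail)
    handle = account["handle"]
    digit_tail = len(handle) - len(handle.rstrip("0123456789"))
    if digit_tail >= 6:
        score += 0.3

    return min(score, 1.0)

# The hypothetical account from the paragraph above
suspect = {
    "handle": "Barbara012490863",
    "created_at": datetime(2019, 3, 1, tzinfo=timezone.utc),
    "post_count": 143_000,
    "followers": 12,
    "following": 4_500,
}
print(naive_bot_score(suspect))  # -> 1.0
```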

