Sarah Nyberg's Twitter bot feeds the emptiness of alt-right trolls

@Arguetron is ready and waiting for abusers to waste hours tweeting at a script.

[Image credit: Reuters/Denis Balibouse]

Engaging with "alt-right" Pepe-spewing racists on Twitter is a diversion I have yet to tire of, but the fact is even I can't tweet enough to satisfy the masses. Fortunately, automating the process is a viable option, as shown by writer Sarah Nyberg's @Arguetron Twitter bot.

It's not the first such scripted process to hit social media (according to Nyberg, her inspiration came from a number of similar bots created by Nora Reed, including @opinions_good and @good_opinions), but it does have a remarkably deep well of benign but baiting responses. One egg-avatar'd tweeter, determined to defend the honor of a not-as-popular-as-his-follower-count-suggests alt-righter, went back and forth with Arguetron for about ten hours without catching on.

It's not exactly a coincidence that Nyberg has been able to create a language so familiar and responsive to these elements: she was a member of the same online communities that birthed so many of them. Like many of us with a background in '90s and '00s chatrooms and forums, she finds the rhythm of internet arguments comes easily; some of us, however, grew up to temper that with some measure of respect for humanity in general. Some have not, and in turn, Nyberg has been targeted by Gamergate-related harassment over the last couple of years.

Still, Arguetron is by design not abusive or malicious in its tweets, and it does not actively seek out adversaries. That's in contrast to some bots, like Nigel Leck's 2010 project @AI_AGW, which hunted down global warming deniers and served them automated, fact-based responses explaining the science. One Hacker News commenter described it at the time as a "pro-active search engine," able to correct claims people didn't even know needed correcting -- particularly interesting given the current trend of messaging bots launched by Google, Facebook and others to do just that. Other examples include the SNAP_R bot that security researchers used to phish Twitter users, and @BrandLover7, which absolutely loves your product.

[Image: A robot holding a mirror that reflects a woman's face. Illustration credit: Thomas Kuhlenbeck]

I chatted with Sarah, and she explained that a big part of the motivation is not to engage in harassing behavior, but to "expose reactionaries and harassers." Since the bot doesn't tweet at anyone on its own, it only ends up arguing with people who are searching Twitter for keywords to pick fights over. As she puts it, "I'd like the project to help people critically look at how toxic Twitter can be, especially for people expressing these kinds of opinions. That it also makes the people engaging in this sort of behavior look ridiculous is a nice side effect."
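
For the curious, here's a rough sketch of how a bot in that passive mold could be wired up. To be clear, this is not Arguetron's actual code: the canned statements, rebuttal lines, function names and polling interval are all placeholder assumptions, and it leans on the tweepy library against Twitter's standard v1.1 API. The key detail is that the script never initiates contact and never reads its opponent's argument; it just posts a statement and answers any reply with another canned line.

```python
# Hypothetical sketch of a "passive bait" bot -- not Nyberg's implementation.
# Assumes the tweepy library and your own Twitter app/user credentials.
import random
import time

import tweepy

# Placeholder credentials -- supply real tokens before running.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Benign-but-baiting statements the bot posts on its own timeline.
STATEMENTS = [
    "feminism just means believing women are people",
    "immigrants make the country better, full stop",
]

# Canned comebacks. They ignore the content of the reply entirely --
# which is why someone can argue with the script for hours without noticing.
REBUTTALS = [
    "that's not an argument",
    "you seem very upset about this",
    "source?",
    "interesting that you felt the need to reply to this",
]

def post_statement():
    """Tweet a random statement; never @ anyone first."""
    api.update_status(random.choice(STATEMENTS))

def answer_mentions(since_id=None):
    """Reply to whoever argues back with a random canned rebuttal."""
    newest = since_id
    for mention in api.mentions_timeline(since_id=since_id):
        api.update_status(
            status="@{} {}".format(mention.user.screen_name,
                                   random.choice(REBUTTALS)),
            in_reply_to_status_id=mention.id,
        )
        newest = max(newest or mention.id, mention.id)
    return newest

if __name__ == "__main__":
    last_seen = None
    while True:
        post_statement()
        last_seen = answer_mentions(last_seen)
        time.sleep(600)  # poll every ten minutes; API rate limits apply
```

The whole trick is in what the bot doesn't do: it never searches for targets and never parses what it's told, so the only people it "argues" with are the ones who went looking for a fight in the first place.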

No matter what ends up happening to Twitter, it would be nice if whoever controls it took a look at these experiments and applied the lessons to addressing abuse on the platform. Unfortunately, I think there's little indication that will happen under its current administration. Of course, if any of those Silicon Valley companies working on bots need a side project, assigning everyone an AI might be a worthwhile 20 percent project. While it can't address the very real issues of stalking and harassment that affect our safety, at least this way trolls get the attention they so clearly crave and the rest of us keep the time they're hoping to steal. It's a win-win.