
Social media has a censorship problem of its own making

When the attention economy and polite society collide.

While YouTube vloggers Diamond and Silk perjured themselves in front of the House Judiciary Committee on Thursday, their preferred social media platform was busy putting out fires of its own. Earlier this week, YouTube mistakenly removed a video by watchdog group Media Matters that debunked claims by conspiracy theorist Alex Jones that the Sandy Hook shootings were faked. Jones' own videos, however, were not removed. This latest takedown is just one in a series of confusing decisions that not only lack transparency but also raise legitimate concerns about censorship and deny the media the tools they need for accurate reporting.

Of course, some curation is valid. YouTube, to its credit, has become far more proactive in its removal of pro-ISIS propaganda. The platform removed 8 million objectionable videos in the last quarter of 2017. Of those, 6.7 million were flagged by YouTube's monitoring software, rather than a human. Some 75 percent of those machine-flagged videos were yanked before being viewed.

That said, if you're looking for unsavory porn on YouTube, you won't have much difficulty finding it, as a recent BuzzFeed investigation found. Simply typing "a girl and her horse" into the platform's search bar would return as many as 20 porn videos within the first page of results -- at least it did until the story's publication on Tuesday. One of the featured videos, published by the ALL ANIMAL channel, had clocked a stunning 2.3 million views by the time it was removed.

And if pro-ISIS content isn't quite your speed, neo-Nazi propaganda from the Atomwaffen Division, a United States-based hate group, is nearly as easy to find as women and farm animals. As a Motherboard report from March points out, while most of the original Atomwaffen videos have been removed, many of them are still mirrored on YouTube and easily discoverable.

Which brings us back to more recent events. Within hours of her shooting rampage at YouTube HQ in San Bruno, California, Nasim Aghdam had virtually every trace of her presence on YouTube, Facebook and Twitter scrubbed. In fact, this is a fairly regular occurrence in response to acts of mass violence.

Instagram deleted the account of Nikolas Cruz, the Parkland high school shooter, shortly after his arrest. Both Devin Kelley, the guy who shot up the Sutherland Springs church in Texas, and Omar Mateen, the Pulse Nightclub gunman, had their pages removed by Facebook as well. Oddly, though, the Twitter account for Dzhokhar Tsarnaev, half of the team behind the Boston Marathon bombing in 2013, remains active even after he received the death penalty for his role in the plot.

In response to Aghdam's attack, YouTube claimed that the account had been "terminated due to multiple or severe violations of YouTube's policy against spam, deceptive practices and misleading content or other Terms of Service violations." But that raises the question: why does content from Aghdam and Cruz violate the platforms' TOS while Atomwaffen videos advocating that we "gas the k*kes" do not? Unfortunately, social media platforms are notoriously (and often intentionally) opaque when it comes to explaining how they enforce their policies.

But as reluctant as social media companies seem to be about removing actual examples of hate speech, they show no such hesitation when it comes to sweeping away posts or accounts that make them look bad. Despite Mark Zuckerberg's recent assurances to Congress that his site does not host hate groups, the platform didn't get around to removing two pages associated with white nationalist Richard Spencer until the middle of April.

Frustratingly, there doesn't seem to be any rhyme or reason to these platforms' censorship mechanisms. Like the ISIS content, Atomwaffen's posts arguably violate the same YouTube TOS provision on hate speech, which prohibits content that "promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes."

This isn't an issue exclusive to YouTube, mind you. Facebook increased its active policing efforts against pro-ISIS posts at the start of 2018, removing 1.9 million pieces of offending content -- about double what it removed in the last three months of 2017. Meanwhile, the social media site has hosted ads for stolen Social Security numbers and personal information for years.

"I am surprised how old some of the posts are and that it seems Facebook doesn't have a system in place for removing these posts on their own," independent security researcher Justin Shafer told Motherboard in April. "Posts that would have words flagged automatically by their system."

When confronted about the stolen Social Security number schemes on its site, Facebook took the liberty of deleting the offending ads before the Motherboard piece ran rather than even pretending to take any form of responsibility for their initial placement. And for all the times that Alex Jones has claimed Facebook would ban him for continuing to refer to transgender folks with a pejorative, he has seemingly suffered few consequences for his transphobic schtick.

Whether pages are removed as a PR move to distance the platform from unsavory associations with mass murderers or as a public safety measure to discourage copycats, these acts of censorship are a relatively new phenomenon with little historical precedent. They're the online equivalent of banning the Unabomber's letters.

Simply deleting offending pages, rather than locking and archiving them, not only makes contemporary reporting on the subjects more difficult, it also serves to whitewash the events that took place. Take the police shooting of Korryn Gaines in 2016, for example. She had barricaded herself in her Baltimore home during a confrontation with law enforcement and had been livestreaming her ordeal until Facebook, at the behest of the cops, cut the feed. Coincidentally, immediately after the video was taken offline, the police decided to fire on Gaines -- killing her and injuring her 5-year-old son.

Facebook pulled the same shenanigans during the Philando Castile shooting, deliberately restricting access to his girlfriend's livestream as she bore witness to his summary execution at the hands of a white cop.

"News isn't just getting shared on Facebook, it's being broken on Facebook," Reem Suleiman, a campaigner for the SumOfUs consumer advocacy group, told CNET in 2016. Her organization is pressuring the social media site to stop censoring live feeds at the request of law enforcement. "If Facebook is making decisions about how news reaches the public then it needs to be transparent about how those decisions are made."

By controlling access to the posts and videos made by society's monsters, social media companies are effectively controlling the narrative of what transpired, presenting a sterilized version of events in which they are never taken to task over the role their services played in the shooters' development and radicalization. History will not look kindly upon them for that.