
After Christchurch, we need more than digital-security theater

Even the TSA has more credibility than Facebook, YouTube and Twitter.

Just after the Christchurch shooting, I came across an article explaining how to make your Twitter, Facebook and YouTube accounts block violent videos.

How-tos like this are depressingly necessary, because while Facebook removes an illustrated nipple for "community safety" at lightning speed with real consequences, the company isn't equally interested in policing content that's indisputably harmful. After the Christchurch attack, Facebook said it took down 1.5 million postings of the terrorist's mass-murder livestream within 24 hours, but only 1.2 million of those videos were blocked at upload.

That's why I thought the article might be worth sharing: to help people avoid the trauma of viewing videos like that. After all, these videos are injected without consent into our online spaces, with the intent to harm us and inspire further violence.

There was one huge problem: The article's instructions require you to change your account settings to block all "unsafe" content. That includes "unsafe" art, sex ed, non-explicit adult content, nudity, sex news, LGBTQ content and more. Because the content settings on Twitter, Facebook, YouTube and others treat sex as just as unsafe as violence, blocking "unsafe" content also means blocking diversity. (The same diversity that once made the internet a lot more fun.)

It also isn't going to work. That's because Facebook just showed us it cares way more about stopping people from enjoying sex than it does about stopping people from enjoying hate.

On Wednesday, Facebook finally acknowledged that white nationalism, white separatism and white supremacy are all basically the same flavor of racist nonsense and said it would ban them. It won't do so immediately, which would've been the right thing to do, but at some unspecified time next week.

This came three days after the Independent published an exclusive showing that despite the Christchurch horror and its online components, Facebook still allows neo-Nazi groups because they "do not violate community standards." When the groups were brought to its attention, "Facebook refused to remove the content," it reported, "and told researchers to unfollow pages if they found them 'offensive.'"


While headlines hail it as an overdue triumph, let's call it out for what it is: bullshit. For years, the public has been doing everything it can to stop Facebook from providing a safe space for neo-Nazi groups and all their coded hate. Even in the aftermath of Charlottesville, Virginia, in 2017, Facebook doubled down on the policies that kept "supremacy" groups safe.

Facebook has not been specific about which pages it will prohibit or remove, or how the policy will be enforced, and it has backed away from taking action on implied and coded hate. Contrast that with its overactive policing of nudity, human sexuality and sexual speech: its policies on implied and coded sex talk are crystal clear.

Facebook did not say how many people viewed the 300,000 copies of the video (out of 1.5 million) that it didn't catch in the first 24 hours of uploads. In a Twitter poll right after the attack, 91 percent of nearly 2,000 respondents said they thought Facebook, YouTube and Twitter should delay livestreams to better moderate hate content. The poll's author, Shannon Coulter, who organizes anti-extremist product boycotts, said the point wasn't to debate how to moderate at that scale; it was the choice these companies made to enable livestreaming without oversight in the first place.

Coulter added, "These companies have already enacted a successful crackdown on ISIS' use of social as a radicalizing/attention-getting tool. They know how to do it. They just haven't done it to the Nazis yet, despite that contingent being larger."

After 9/11, America entered a future of surveillance and privacy invasion in the name of safety and security. It was a blood-soaked gold rush of domestic spying, even though deep down we knew that no amount of TSA humiliation was going to make us safer. The term "security theater," coined by security technologist Bruce Schneier, grew out of this era: "the practice of investing in countermeasures intended to provide the feeling of improved security while doing little or nothing to achieve it."

Facebook, Google and Twitter have wrecked our privacy and security while facilitating the rise of internet-savvy, right-wing attacks and extremist terrorism. At the same time, these companies have brutally policed sex, art and LGBTQ content to the detriment of peaceful, real-life, diversity-positive communities -- "for our safety."

It's impossible to see the mess we're in as anything but the most perverse kind of security theater.

If you think that muting and removing extremist and conspiracy content is an impossible game of whack-a-mole, you're wrong. Watch a video on YouTube and see how quickly its algorithm begins to shovel content in your face, leading you down dark tunnels of ever-worsening white nationalist fervor. Then take a cruise around YouTube, Facebook and Twitter on a German VPN to see what it looks like when hate-speech content is disallowed. It's suddenly not a nightmare. Weird, right?

In late 2017, Germany enacted a law against online hate speech, the Network Enforcement Act (NetzDG), and has enforced it by fining companies and making real-life arrests for posting and inciting racial hatred online. The country decided Facebook and Twitter weren't doing enough to stop hate speech and propaganda -- a conclusion Facebook fought even as Germany moved to protect its citizens from the company.

If Germany has to pass a law to stop your company from creating the next Hitler... I can't think of a clearer sign that you're doing something very, very wrong. Hell, you could even try a half-assed stopgap, like prohibiting livestreams and video uploads during terror attacks. But no.

In the aftermath of Christchurch and the failed efforts to contain the disturbing video of the attacks, a significant number of people dumped Facebook. In the most high-profile instance, AirAsia's CEO Tony Fernandes rage-quit Facebook, saying, "New Zealand was too much for me." Fernandes explained in a series of tweets that he was walking away from his 670,000 followers on the service because "Facebook could have done more to stop some of this."

And so it falls back to us to protect ourselves from Facebook. The thing about changing our settings on whatever platform is that it's a setup: We lose the edgy internet and the voices we want to hear but still don't weed out the evil jerks ruining it for everyone else. It's no coincidence that the puritanism of Facebook, YouTube and Apple, and even the soft puritanism of Twitter, aligns with conservative values. Sex censorship is literally a Nazi value. So until they prove otherwise, for our own safety we should treat these companies as socially conservative, with a vested interest in promoting white nationalism.

If we've learned anything from the internet in the past 20 years, it's that sex is synonymous with diversity. And sex's most social function is to teach the importance of consent. Specifically, informed consent: permission granted with explicit understanding of the consequences. Looking back at what's happened to our private data under social media's terms of service, it seems to me that it serves these companies all too well to keep the public misinformed about consent.

Instead we have harmful content proliferating while women, LGBTQ people, people of color and all things sex-positive are silenced in the name of "safety." You'd think that exposure to these voices would engender empathy. And you'd be right.

A recent article examining violent online conspiracy theories in light of Christchurch concluded that those theories can't necessarily be stopped. They're part of the human condition, whether it's our fascination with Roswell, claims that the CIA created lesbianism or Rush Limbaugh saying that Christchurch was a setup to smear the right.

Consistently, the article found, conspiracy followers "punch up." Meaning, conspiracies -- like the ones that motivated everyone from the QAnon murder suspect to the Christchurch shooter -- address pain points where people believe they lack power and control over their own lives. "Empower people," the article states, "give them a sense of control, operate with transparency, and conspiracy theories seem to become less appealing."

There is no need to tolerate "free speech" trolls on Twitter. The company just needs to actually enforce its rules against abuse. Facebook banning sex talk and art nudes while giving Holocaust-denial discussion groups space to fester is at best a deeply cynical interpretation of global community stewardship. At worst, it is a new form of deliberate cruelty, and a disturbing performance of the absence of empathy.
