
Two Supreme Court cases could upend the rules of the internet

Section 230 is on the docket, and it could change content moderation as we know it.


The Supreme Court could soon redefine the rules of the internet as we know it. This week, the court will hear two cases, Gonzalez v. Google and Twitter v. Taamneh, that give it an opportunity to drastically change how speech is regulated online.

Both cases deal with how online platforms have handled terrorist content. And both have sparked deep concerns about the future of content moderation, algorithms and censorship.

Section 230 and Gonzalez v. Google

If you’ve spent any time following the various culture wars over free speech online in the last several years, you’ve probably heard of Section 230. Sometimes called “the twenty-six words that created the internet,” Section 230 is a provision of the Communications Decency Act that shields online platforms from liability for content posted by their users. It also protects companies’ ability to moderate what appears on their platforms.

Without these protections, Section 230 defenders argue, the internet as we know it couldn’t exist. But the law has also come under scrutiny over the last several years amid a larger reckoning with Big Tech’s impact on society. Broadly, those on the right favor repealing Section 230 because they claim it enables censorship, while some on the left say it allows tech giants to avoid responsibility for the societal harms caused by their platforms. But even among those seeking to amend or dismantle Section 230, there’s been little agreement on specific reforms.

Section 230 also lies at the heart of Gonzalez v. Google, which the Supreme Court will hear on February 21st. The case, brought by family members of a victim of the 2015 Paris terrorist attacks, argues that Google violated US anti-terrorism laws when ISIS videos appeared in YouTube’s recommendations. Section 230 protections, according to the suit, should not apply because YouTube’s algorithms suggested the videos.

“It basically boils down to saying platforms are not liable for content posted by ISIS, but they are liable for recommendation algorithms that promoted that content,” said Daphne Keller, who directs the Program on Platform Regulation at Stanford's Cyber Policy Center, during a recent panel discussing the case.

That may seem like a relatively narrow distinction, but algorithms underpin almost every aspect of the modern internet. So the Supreme Court’s ruling could have an enormous impact not just on Google, but on nearly every company operating online. If the court sides against Google, then “it could mean that online platforms would have to change the way they operate to avoid being held liable for the content that is promoted on their sites,” the Bipartisan Policy Center, a Washington-based think tank, explains. Some have speculated that platforms could be forced to do away with any kind of ranking at all, or would have to engage in content moderation so aggressive it would eliminate all but the most banal, least controversial content.

“I think it is correct that this opinion will be the most important Supreme Court opinion about the internet, possibly ever,” University of Minnesota law professor Alan Rozenshtein said during the same panel, hosted by the Brookings Institution.

That’s why dozens of other platforms, civil society groups and even the original authors of Section 230 have weighed in, via “friend of the court” briefs, in support of Google. In its brief, Reddit argued that eroding 230 protections for recommendation algorithms could threaten the existence of any platform that, like Reddit, relies on user-generated content.

“Section 230 protects Reddit, as well as Reddit’s volunteer moderators and users, when they promote and recommend, or remove, digital content created by others,” Reddit states in its filing. “Without robust Section 230 protection, Internet users — not just companies — would face many more lawsuits from plaintiffs claiming to be aggrieved by everyday content moderation decisions.”

Yelp, which has spent much of the last several years advocating for antitrust action against Google, shared similar concerns. “If Yelp could not analyze and recommend reviews without facing liability, those costs of submitting fraudulent reviews would disappear,” the company argues. “If Yelp had to display every submitted review, without the editorial freedom Section 230 provides to algorithmically recommend some over others for consumers, business owners could submit hundreds of positive reviews for their own business with little effort or risk of a penalty.”

Meta, on the other hand, argues that a ruling finding 230 doesn’t apply to recommendation algorithms would lead to platforms suppressing more “unpopular” speech. Interestingly, this argument would seem to play into the right’s anxieties about censorship. “If online services risk substantial liability for disseminating third-party content … but not for removing third-party content, they will inevitably err on the side of removing content that comes anywhere close to the potential liability line,” the company writes. “Those incentives will take a particularly heavy toll on content that challenges the consensus or expresses an unpopular viewpoint.”

Twitter v. Taamneh

The day after the Supreme Court hears arguments in Gonzalez v. Google, it will hear another case with potentially huge consequences for how online speech is moderated: Twitter v. Taamneh. And while it doesn’t directly deal with Section 230, the case resembles Gonzalez v. Google in a few important ways.

Like Gonzalez, the case was brought by the family of a victim of a terrorist attack. And, like Gonzalez, family members of the victim are using US anti-terrorism laws to hold Twitter, Google and Facebook accountable, arguing that the platforms aided terrorist organizations by failing to remove ISIS content from their services. As with the earlier case, the worry from tech platforms and advocacy groups is that a ruling against Twitter would have profound consequences for social media platforms and publishers.

“There are implications on content moderation and whether companies could be liable for violence, criminal, or defamatory activity promoted on their websites,” the Bipartisan Policy Center says of the case. If the Supreme Court were to agree that the platforms were liable, then “greater content moderation policies and restrictions on content publishing would need to be implemented, or this will incentivize platforms to apply no content moderation to avoid awareness.”

And, as the Electronic Frontier Foundation noted in its filing in support of Twitter, platforms “will be compelled to take extreme and speech-chilling steps to insulate themselves from potential liability.”

There could even be potential ramifications for companies whose services are primarily operated offline. “If a company can be held liable for a terrorist organization’s actions simply because it allowed that organization’s members to use its products on the same terms as any other consumer, then the implications could be astonishing,” Vox writes.

What’s next

It’s going to be several more months before we know the outcome of either of these cases, though analysts will be closely watching the proceedings to get a hint of where the justices may be leaning. It’s also worth noting that these aren’t the only pivotal cases concerning social media and online speech.

There are two other cases, related to restrictive social media laws out of Florida and Texas, that might end up at the Supreme Court as well. Both of those could also have significant consequences for online content moderation.

In the meantime, many advocates argue that Section 230 reform is best left to Congress, not the courts. As Jeff Kosseff, a law professor at the US Naval Academy who literally wrote the book about Section 230, recently wrote, cases like Gonzalez “challenge us to have a national conversation about tough questions involving free speech, content moderation, and online harms.” But, he argues, the decision should be up to the branch of government where the law originated.

“Perhaps Congress will determine that too many harms have proliferated under Section 230, and amend the statute to increase liability for algorithmically promoted content. Such a proposal would face its own set of costs and benefits, but it is a decision for Congress, not the courts.”
