In 2010, Anthony Elonis threatened his estranged wife by writing rants on his Facebook page such as, "There's one way to love you but a thousand ways to kill you. I'm not going to rest until your body is a mess, soaked in blood and dying from all the little cuts."
For making these threats, a federal district court sentenced him to more than three years in prison.
On June 1, the Supreme Court voided that conviction, explaining that the standard the district court had used to judge whether Elonis's posts were "true threats" was insufficient. Jurors had been asked to consider only whether the threats would cause a reasonable person to be afraid. Chief Justice John Roberts wrote that juries should also consider whether the defendant intended to make a true threat. The ruling will make it more difficult than ever to prosecute the authors of online death and rape threats.
If only everyone would follow that lead. When crimes like stalking, threatening someone with violence, calling for others to physically harm someone, and defamation take place online, they are often treated less seriously by law enforcement, friends and family, and bystanders than when they are committed in a physical, offline place.
High-profile stories over the last couple of years have raised awareness of these crimes. There's Caroline Criado-Perez, a blogger and cofounder of the Women's Room website, who led a campaign to put a woman on a British banknote and consequently received a deluge of death and rape threats. Or Robin Williams's daughter, Zelda Williams, who publicly left Twitter after being harassed following her father's death. Or the two New York Times reporters who left their homes after their addresses were posted online in retaliation for including the "address" of the police officer in the Ferguson shooting (in reality, they included the name of a street he once lived on, which had already been published by other outlets); they received robbery and death threats. Or Anita Sarkeesian, a media critic who has been harassed online since she launched a Kickstarter campaign to fund a series of videos exploring representations of women in pop culture, and who canceled a speaking event at Utah State University last year after the school received a threat from someone claiming he or she would commit "the deadliest school shooting in American history" if she spoke. (This wasn't the first time someone had threatened violence at one of her events, but it was the first time that the hosting organization, due to concealed-carry laws, could not prevent someone with a permit from bringing a gun to the event.)
And yet online harassment largely goes unchecked, and not only in the cases that make headlines.
According to a study by Pew Research, 25% of young women have been sexually harassed online, and 26% have been stalked. Online harassment is a prevalent, serious problem that even the Supreme Court (albeit implicitly) agrees should be treated in cyberspace in the same way that it's treated in real life. So why is so little still being done about such a widespread problem?
Local law enforcement doesn't always know how to enforce online harassment laws.
Arguing with someone online, which is what many aggressors maintain they are doing, is not illegal. Neither is calling someone names. But harassment, stalking, threatening someone with violence, and calling for others to physically harm someone—what many aggressors are actually doing—are illegal. You can also be sued for defamation, for invading privacy in certain ways, and for intentionally inflicting emotional distress.
In some cases, crimes are more difficult to litigate online. Some states' harassment laws, for instance, cover only threats sent directly to the target. Tweeting someone's nude photo at her boss is harassment, but because it isn't directed at the target specifically, it often doesn't fall under the legal definition. Coordinated harassment carried out by a cybermob is difficult to prosecute because individual actions may not themselves qualify, at least legally, as harassment, even though the group's actions together have that effect. And nonconsensual pornography has yet to be criminalized in 34 states. Technicalities aside, "There are tons of laws that say [online harassment] is illegal," says Danielle Citron, a law professor at the University of Maryland who studies online harassment. "We already have those laws on the books."
But those laws aren't being frequently enforced. "Frankly, police at the local level have a very hard time figuring out how to investigate it," Citron says. "And they don't want to say they don't know."
What can be done about it: The House of Representatives recently called on the Department of Justice to better enforce laws against severe online threats. Organizations like Working to Halt Online Abuse (WHOA) and the National Network to End Domestic Violence have run education efforts for law enforcement officials. Organizations like these, as well as the Cyber Civil Rights Initiative and Without My Consent, are also working to update stalking and revenge porn laws.
Another legal change that would benefit targets of online harassment, Citron argues, would be to allow pseudonymous litigation. That would let victims of online harassment pursue legal action without bringing more attention to content, like revenge porn, that is intended to embarrass them.
Startups don't think about online harassment until it's too late.
Safety and security, if they are considered at all, are often afterthought additions to new social networks. "Harassment is a huge problem, and it's not planned for," says John Adams, the former head of safety at Twitter, who now works as a consultant to startups on safety and security. "It's very easy for them to buy 1,000 servers on Amazon and build a company, but they don't plan for privacy or security or harassment."
Consider Yik Yak, an anonymous messaging app that has become notorious for harassment on college campuses. Its explanation? "Yik Yak's founders say the app's overnight success left them unprepared for some of the problems that have arisen since its introduction," read one recent New York Times article. It's the "whoops, we didn't realize" excuse.
What can be done about it: Startup founders who discover that their network is being used for harassment should not, at this point, be surprised.
The problem is that in the early stages of a startup's life, just keeping a site online can be a challenge. By comparison, the potential for harassment doesn't seem like a pressing issue. But it can quickly become a much bigger problem as startups grow their user bases. "Given the scale Twitter is at," Del Harvey, head of Trust and Safety at Twitter, said in a TED Talk last year, "a one-in-a-million chance happens 500 times a day." Putting effective tools and policies in place before something bad happens is not only the right thing to do, but it can also help tech companies avoid a public relations nightmare and a reputation as a dangerous platform.
Startups aren't legally obligated to address online harassment in the way that they are obligated to address, say, copyright infringement. But the pressure to do so could, Adams argues, come from their investors. "Why don't we see more VCs making more of these plans part of their pre-funding agreements?" he asks. "'I'm not going to give you funding until you have a security plan. I'm not going to give you funding until you address online harassment in the code of conduct.'"
Tech companies have no liability for harassment on their platforms.
Business owners can be sued for injury that occurs on their physical, offline properties if conditions likely to cause harm were present—for instance, if the business built a parking lot with no lighting. But website owners cannot be sued for creating conditions under which harm is likely to occur. "[A platform] is more liable for copyright violation," says Nancy Kim, a professor at the California Western School of Law who studies how the law applies to online harassment, "than if someone makes a death threat on [its] website."
Because Section 230 of the Communications Decency Act protects Internet properties from liability for content posted by their users, platforms face no legal pressure to create safe online environments. The law also shields platforms created expressly to host destructive content, like revenge porn sites and anonymous gossip sites such as Campus Gossip, which charges a fee to take down content.
What can be done about it: If you ask any privacy advocate for a list of their greatest fears, any change to Section 230 will likely be on it. Making platforms at all liable for content that users post is seen by many activists as a slippery slope that ends in the destruction of public discourse as we know it. "If you introduce liability, these companies don't have a particularly compelling reason to let everybody talk," says Danny O'Brien, the international director of the Electronic Frontier Foundation. "What would happen is that you would see conversations, arguments about issues like Israel and Palestine being pushed off networks."
But some wonder if there isn't a way to remove protection from websites that are actively causing harm while still upholding an open Internet. Citron, for instance, has proposed exempting two specific cases from Section 230: sites that intentionally solicit content that breaks the law, such as defamatory information or incitements to violence, and sites that encourage the posting of nonconsensual pornography and charge victims for its removal. "My strategy was to think of who were the very worst actors, who make a mockery of Section 230," she says. Unless Twitter and Facebook undertake a very unlikely pivot, they wouldn't be affected by changes like these. Some of the more malicious gossip and porn sites that facilitate online harassment, however, would.
There are no easy technical solutions to this problem.
Social media companies do not want to implement technologies that censor their platforms by mistaking healthy debate for malicious harassment; for the same reason, companies are reluctant to automatically suspend user accounts. Besides, there is little anyone can do to prevent a malicious user from rejoining a site after his or her account has been shut down. Even if Twitter were to require users to provide a phone number when they create a new account, as it has done in some cases, it's not that hard to get a new phone number.
Disallowing anonymous accounts isn't a real solution, either; anonymity has great value for people who are, say, political dissidents or human-rights activists.
All of this makes it hard for tech companies to build truly powerful tools and features that might prevent online harassment.
Most platforms have punted the problem to users, offering tools—as Twitter does—that help block and filter harassment from their individual news streams. But this doesn't really solve the problem. As Mary Anne Franks put it in The Atlantic, "This is the equivalent of responding to someone yelling in your face as you walk down the street by putting on a blindfold and earplugs."
What can be done about it: After receiving a lot of bad press about how it handled Gamergate harassment, Twitter made a rush of announcements that included a more streamlined way for users to flag abusive tweets, improved features to help individual users report threats to law enforcement, and a tripling of the size of the human team that fields user reports of online harassment. Twitter's latest tool is a filter that users can turn on to automatically clean up their feeds—and, according to early reviews, it actually works.
The company has also been working to apply to its fight against online harassment a strategy it uses for stopping spam. Twitter is trying to identify signals that suggest an account is being used purely for harassment purposes—for example, bounced emails to the account holder, accounts that are often blocked by other users, and accounts with low follower counts—so that Twitter's system can flag those accounts automatically, and then follow up with additional steps, like verifying account email addresses.
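The kind of signal-based flagging described above can be sketched in a few lines of code. Note that the specific signals, thresholds, and scoring below are illustrative assumptions for the sake of the sketch; Twitter has not published the actual rules its system uses:

```python
# Hypothetical sketch of signal-based abuse flagging. The signals and
# thresholds here are invented for illustration, not Twitter's real system.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    email_bounced: bool   # the account's verification email bounced
    times_blocked: int    # how many distinct users have blocked the account
    follower_count: int

def should_flag_for_review(signals: AccountSignals,
                           block_threshold: int = 10,
                           follower_floor: int = 5) -> bool:
    """Flag an account for follow-up steps (e.g., email verification),
    not automatic suspension, when several weak signals co-occur."""
    score = 0
    if signals.email_bounced:
        score += 1
    if signals.times_blocked >= block_threshold:
        score += 1
    if signals.follower_count < follower_floor:
        score += 1
    # Require at least two co-occurring signals: any single one (say, a
    # brand-new account with few followers) is innocuous on its own.
    return score >= 2
```

The two-signal requirement reflects the trade-off the article describes: platforms want to catch throwaway harassment accounts without mistaking ordinary new users for abusers.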
All of this might be summarized as playing catch-up. There's still a lot of work to do: For one thing, the new Twitter filters are available only to "verified" users. There's still no way to stop someone from creating multiple fake accounts with which to harass other people. And people who post tweets that can do significant damage—like tweets that include the phone number and address of their target—and then delete them before Twitter can suspend the account are often not held accountable for their actions.
Other platforms have experimented with solutions that overcome the technical limitations of moderation. League of Legends, for instance, created a community Tribunal to punish bad actors in its system. According to Jeffrey Lin, one of the game's designers, the players' judgments coincided with developer judgments on bad behavior up to 80% of the time. Lin says that 280,000 players were "reformed" in one year, meaning they had been punished by the Tribunal but then achieved a positive standing in the community after changing their behavior.
Women and minorities are underrepresented at big tech companies and among lawmakers.
Research suggests that, in general, women are more likely to experience the most severe types of harassment. WHOA offers an optional survey to people who seek its help with their harassment cases. Of the 4,043 people who completed the survey between 2000 and 2013, 70% identified as women. In a 2006 experiment, researchers set up fake accounts with feminine and masculine names and launched them into chat rooms. On average, the feminine-named accounts received 100 sexually explicit or threatening messages a day; the masculine-named accounts received about 3.7. According to a recent Pew study, people of all ages and genders experience harassment online. But women, in particular young women, are more likely than their male peers to face stalking and sexual harassment.
Online mobs do sometimes pick male targets. But the abuse is different. "[Online harassment towards women] is very much sexually humiliating and sexually threatening," says Citron. "It's not any old threats, it's rape threats. It's not any old privacy invasion; it's a nude photo. It's not any old defamation; it's accusing someone of having herpes and being a prostitute."
In other words, the same people—women, people of color—who are most likely to be harassed or severely harassed are also least likely to be in positions of power at platforms like Twitter, Google, and YouTube, in police stations, and among lawmakers. And the people in positions of power in those organizations are less likely to have firsthand knowledge of what it's like to be harassed online.
This is important, because experiencing something firsthand inevitably changes how, and how often, you think about it. Jason Bentley, the legal director at Scribd, for instance, changed his views on online harassment after someone he had banned from the platform decided to target his personal life. The user claimed his daughter had been raped by Bentley, and he posted his accusation not only on Scribd but on sites like Craigslist and Ripoff Report, where it would surface in Google searches for Bentley's name. "I didn't really feel [the problem of online harassment] until that," Bentley says. "It was a lot more abstract. I was a lot more prone to accepting somebody's justification that, oh no, they're actually an advocacy journalist, this is not actually harassment."
What can be done about it: Ideally, we would create a society in which women and minorities are not underrepresented in leadership positions. And there are lots of people working on that problem.
In the meantime, targets of online harassment are helping to explain its severity to people who have never experienced the worst forms of it, by sharing personal stories—acts of bravery, considering that being public about online harassment often leads to the speaker experiencing more online harassment.
Sarkeesian, for instance, spoke about online harassment and women's representation in games at five universities and three conferences, and did 20 media appearances and interviews last year. She has also consulted with social media platforms about how they could improve their policies and platforms to prevent harassment.
Writer and performer Lindy West told a story on This American Life and in The Guardian about how a troll created a Twitter account that impersonated her dead father. Twitter CEO Dick Costolo mentioned the story in a leaked memo about harassment on the platform. "We suck at dealing with abuse and trolls on the platform, and we've sucked at it for years," he wrote. "It's no secret, and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face."
Why It Matters
As Costolo noted, having a reputation as a place where women receive death threats is a business problem for technology companies that are striving to keep people engaged with their platforms and attract new users. But it's an even bigger problem for society. Imagine hundreds of people hurling slurs at you, urging you to kill yourself, and threatening to kill you—or to rape you, with very specific descriptions of how they would like to do so, and your address attached. Your life would be disrupted, to say the least.
Over the past few months, I spoke to dozens of women who have been the targets of online harassment. One woman I spoke with about the online harassment she endured said she was so stricken with anxiety that her partner had to deliver medication to her in bed. Another was undergoing exposure therapy, during which she had been instructed to practice doing something she enjoyed—while facing an open laptop. Another had changed her name after an ex posted her nude photos all over the Internet. A game developer whose personal information had been published online had to call her father and explain why people were calling him to tell him she was a whore. Others talked about turning down prank pizza orders or worrying about a SWAT team showing up at their homes on the basis of fake 911 calls. At the very least, the harassment was a burden on their time and energy.
"What if you were doing your job, and there were just someone in the office yelling threats at you all day?" feminist author Jaclyn Friedman, who has been a target of online harassment, said. "Even if they never followed through on them, it would still impact your quality of life, and your ability to do your job. The idea that I can't function on the Internet without people saying these vile and violent things to me is not okay, whether or not they ever actually come rape me."
An Internet culture that allows online harassment can restrict the speech and opportunities of its targets. Though research about how harassment affects social media use is hard to come by, back in 2005 a Pew Internet and American Life Project study attributed a 9% decline in women's use of chat rooms to menacing comments. Anecdotally, many women say they don't participate on platforms like Twitter as much as they otherwise might, for fear of harassment. "If there's something crazy happening in the news, I won't comment on it," says Imani Gandy, a senior legal analyst at RH Reality Check, a publication that reports on sexual and reproductive health and justice issues. "Because I know if I do comment on it, I'm just going to end up being inundated with nutjobs. I definitely self-censor a lot more than I used to because of the harassment."
In an incident that the media dubbed "Donglegate," Adria Richards became a target of harassment in 2013 after she tweeted about two men who made an inappropriate joke at a technology conference and one of them, after losing his job, posted about it on Hacker News. She didn't tweet for about two years after she received her first death threat. Though she used to regularly make YouTube videos with tech "how-tos" and commentary—almost 400 of them in three years—after the harassment hit, she stopped. In the last two years, she's posted one video.
Most of us aren't going to find ourselves in the same situation that Richards was caught up in, but after hearing her story, we might hesitate to participate in public discourse, afraid that one day we'll be the ones who find ourselves in a swirling whirlwind of rage. "It could happen to anyone," says Richards, who used Twitter for five years before Donglegate without ever having a problem. "Like cancer."