abuse
Twitter wants to ‘increase the health of public conversation’
Twitter doesn't just want to be more transparent about the toxic content on its site; it also wants to be more proactive about removing it altogether. Speaking to the House Committee on Energy and Commerce today, as part of a hearing titled "Twitter: Transparency and Accountability," CEO Jack Dorsey said that his company's singular objective is to "increase the health of public conversation." But he said this isn't just about spotting and removing abusive content like harassment, or blocking suspicious accounts. It's also about doing so in a timely, more proactive manner. As it stands, Dorsey said, Twitter relies heavily on users reporting others' bad behavior, and that simply needs to change.
Twitter is considering a transparency report on suspended accounts
As part of his testimony before the Senate Intelligence Committee today, in a hearing titled "Foreign Influence Operations' Use of Social Media Platforms," Twitter CEO Jack Dorsey said that his company is exploring the idea of a transparency report for suspended accounts. He said that, while details of what this document would look like and what information it might include are still being worked out, it's something that's heavily on his mind. Twitter already publishes a biannual transparency report that discloses how many accounts it removes for promoting terrorism, and Dorsey said doing something similar for suspended accounts would only be a matter of figuring out its implementation.
Watch tomorrow's Jack Dorsey congressional hearings right here
Another round of social media congressional hearings is upon us. This time, it's Twitter CEO Jack Dorsey's turn: he'll testify alongside Facebook Chief Operating Officer Sheryl Sandberg before the Senate Intelligence Committee on September 5th. But that hearing, which will focus on foreign election interference, won't be the only one of the day for Dorsey. He's also set to testify alone before the House Energy and Commerce Committee in a hearing titled "Twitter: Transparency and Accountability." There, he'll be asked how the company's algorithms work to filter out abuse, as well as about its decision-making process when it blocks certain content (and accounts) from appearing on its site.
Unpaid and abused: Moderators speak out against Reddit
This article was produced in partnership with Point, a YouTube channel for investigative journalism. It discusses topics that you may find upsetting and contains strong language and racial slurs. Somewhere out there, a man wants to rape Emily. She knows this because he was painfully clear in typing out his threat. In fact, he's just one of a group of people who wish her harm. For the past four years, Emily has volunteered to moderate the content on several sizable subreddits -- large online discussion forums -- including r/news, with 16.3 million subscribers, and r/london, with 114,000 subscribers. But Reddit users don't like to be moderated.
Ruby Rose is the latest celebrity driven off Twitter by abuse
It's only been a few days since Ruby Rose was cast as Batwoman in The CW's Arrowverse, but the Australian actor has already had to quit Twitter after a stream of abusive messages. Much of the backlash has focused on accusations that Rose -- who identifies as gender fluid and is a prominent LGBTQ activist -- isn't "gay enough" to play the role of Kate Kane (aka Batwoman), who is a lesbian in the comic books. Others took issue with the fact that, unlike the character in the comics, Rose isn't Jewish. Still others simply questioned her acting ability, leading to the creation of a #RecastBatwoman campaign on social media.
Domestic abusers are exploiting smart home devices
Smart home devices are supposed to make life easier, but it's now apparent that their convenience carries unintended consequences for domestic abuse survivors. The New York Times has conducted interviews showing that abusers are exploiting smart speakers, security cameras, doorbells and other connected devices to control, harass and stalk their targets. The perpetrators will not only spy on their partners, but cause havoc with bursts of music, sudden changes in lighting or temperature and other attempts at intimidation.
Twitter acquires online safety company to bolster anti-abuse efforts
Twitter has repeatedly come under fire for not doing enough to stop hate speech, allowing outside groups to sow political discord and failing to limit the spread of misinformation. To address these issues, the company announced earlier this year that it was looking for outside experts to help in its effort to promote healthy, open and civil conversations on its platform. Now, it's acquiring a company that might be able to boost those efforts internally.
Twitter will hide tweets from annoying trolls
In March, Twitter announced that it would be working harder to encourage open, healthy, civil conversations on its platform and it asked outside experts to weigh in on the best way to do so. Today, the company described some changes it's making to how it handles content that might distort conversations but doesn't actually violate its policies. "One important issue we've been working to address is what some might refer to as 'trolls,'" David Gasca, Twitter's product manager for health, said in a blog post. "Some troll-like behavior is fun, good and humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search."
Zuckerberg apologizes for Facebook's response to Myanmar conflict
Mark Zuckerberg has been accused of keeping too quiet on the many issues affecting Facebook recently, so Myanmar activists were surprised when they received a personal response from the chief exec following their open letter criticizing his approach to hate speech in their conflict-stricken country.
Twitter will show users its rules to discourage abuse
Twitter just promised to watch breaking events closely to curb trolling and fake news, but how does it stop users from getting into trouble in the first place? The answer might be simple: show people the rules before they do something wrong. It's launching a study that will try publicizing its rules to see whether doing so "improves civility." Research shows that people are more likely to honor rules if they can clearly see them, Twitter argued, so it stands to reason that the same would be true for social networks.
Twitter: Banning world leaders would ‘hide important information’
In a blog post today, Twitter made an attempt at clarifying its stance on how political figures and world leaders use its platform. Many have called for the site to block Donald Trump as he has repeatedly tweeted violent and threatening posts, and Twitter has often stumbled through its explanations of why it hasn't done so. In its post today, the company says that blocking leaders or deleting their tweets would ultimately limit important conversations. "Blocking a world leader from Twitter or removing their controversial Tweets, would hide important information people should be able to see and debate," it said. "It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions."
Facebook introduces new tools to fight online harassment
Facebook has been rightfully criticized for how it has handled (or not handled) harassment and abuse in the past. But today, the company announced a couple of new tools aimed at fighting online harassment and giving users more control over who can interact with them.
Instagram warns you if posts show harm to animals or nature
Protecting wildlife and sensitive natural areas is hard enough as it is, and it's not helping that every brain-dead tourist wants to post a selfie with a koala bear or dolphin. Starting today, Instagram is making it harder to find such content. If you search hashtags associated with images that could harm wildlife or the environment, it will post a warning before letting you proceed. "I think it's important for the community right now to be more aware," Instagram's Emily Cain told National Geographic. "We're trying to do our part to educate them."
Twitter halts verification after backlash over Charlottesville organizer
Over the last month, Twitter seemed to finally wake up to the need to fight the rampant hate speech and abuse that happens on its platform. That made yesterday's decision to verify Jason Kessler, the organizer of the white supremacist rally that took place in Charlottesville, VA, this summer, incredibly odd (or incredibly foolish). Regardless, the blue checkmark is firmly in place on Kessler's account, and users pretty quickly told Twitter CEO Jack Dorsey they weren't happy about this move. This morning, Twitter responded -- not by revoking Kessler's verified status, but by saying it was pausing all general verifications to resolve confusion around what being verified really means. "Verification was meant to authenticate identity & voice," reads a tweet on the company's support account, "but it is interpreted as an endorsement or an indicator of importance."
Twitter clarifies what behavior will get you banned
Twitter has been in the headlines for years over its poor handling of abuse on its platform, and the social media service has recently been rolling out new tools to fight it. Today, Twitter's Safety team published a new version of its rules to clarify its policies. The company said that its basic guidelines and approaches haven't changed; this is Twitter attempting to be more transparent and clear about how it handles abuse on the platform.
Now Twitter's quest to become a 'safer' place has a schedule
You no longer have to wonder when you'll see Twitter implement the new rules promised by its CEO and outlined in that leaked email. The social network has released a "Safety Calendar," which details when it will roll out a series of new rules to make the platform a safer place. As the internal email said, the company plans to crack down on hate and violence on its website: on November 3rd, it promises to start suspending accounts of "organizations that use violence to advance their cause."
Internal Twitter email explains its new plans to fight abuse
Twitter promised stricter rules for abuse and hate in the wake of a boycott, but what will those rules entail, exactly? It's a bit clearer after today. Wired has obtained an email providing early details on the new policies, and they're mostly good news -- although they probably won't satisfy some people. Most notably, Twitter is planning to crack down on all groups that "have historically used violence as a means to advance their cause" rather than focusing primarily on terrorism. It'll also take action against tweets that glorify violence, not just direct threats. There's no guarantee that this will lead to bans and suspensions against hate groups (Twitter is still hashing out the details), but that's what the early language implies.
#WomenBoycottTwitter protest spreads across social media
Twitter has been home to many hashtag campaigns, but tonight one is trending that's a little different: #WomenBoycottTwitter. Sparked by its "temporary lock" of Rose McGowan's account yesterday, the movement's purpose is to go dark "In solidarity w @rosemcgowan and all the victims of hate and harassment Twitter fails to support." Participants include fellow celebrities, women who have experienced harassment online and men who support its cause.
'Pharma bro' Shkreli ordered to jail over internet harassment
Martin Shkreli is learning the hard way that his eagerness to harass others has consequences beyond social networking bans. Judge Kiyo Matsumoto has ordered the price-gouging "pharma bro" CEO (and, most recently, securities fraud convict) to jail over the Facebook post he wrote offering $5,000 to whoever could get him a strand of Hillary Clinton's hair. Shkreli and his lawyer maintained that the post was satire protected by free speech, but Judge Matsumoto didn't buy it. This was "solicitation of assault," she said in her decision, adding that it wasn't funny to effectively issue a threat.
UK says online hate crime is as serious as offline offences
The UK's Crown Prosecution Service (CPS) today laid out a renewed commitment to tackling hate crime, including making sure that online offences are dealt with appropriately. In its public statements, the CPS affirmed that cases of digital hate crime will be treated "with the same robust and proactive approach used with offline offending," and that there is no difference in the seriousness of such crimes.