In March, Twitter announced that it would work harder to encourage open, healthy, civil conversations on its platform, and it asked outside experts to weigh in on the best way to do so. Today, the company described some changes to how it handles content that might distort conversations without actually violating its policies. "One important issue we've been working to address is what some might refer to as 'trolls,'" David Gasca, Twitter's product manager for health, said in a blog post. "Some troll-like behavior is fun, good and humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search."
We're committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.
— jack (@jack) March 1, 2018
In cases where tweets violate its policies, Twitter says it takes action. But for the murkier cases, where no violation has actually occurred, Twitter is now hiding those tweets in searches and conversations. They will still be available on Twitter, but users will have to click the "Show more replies" button in a conversation or set their searches to show all results. Twitter is using a new set of behavioral signals to determine which tweets should be handled this way, including non-confirmed email addresses, signing up for multiple accounts at the same time, repeatedly mentioning accounts that don't follow the user back and behavior that might indicate a coordinated attack. "We're also looking at how accounts are connected to those that violate our rules and how they interact with each other," said Gasca.
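A system like the one described could plausibly combine such signals into a single demotion score. The sketch below is purely illustrative: the signal names, weights, and threshold are all invented for this example, as Twitter has not published the details of its actual model.

```python
# Hypothetical sketch of signal-based demotion. All names and weights
# here are assumptions, not Twitter's real implementation.

def troll_likelihood(signals: dict) -> float:
    """Combine behavioral signals (each a 0..1 value) into one score."""
    weights = {
        "unconfirmed_email": 0.2,       # account email never confirmed
        "bulk_signup": 0.3,             # created alongside many accounts
        "unreciprocated_mentions": 0.3, # repeatedly mentions non-followers
        "coordinated_activity": 0.2,    # resembles a coordinated attack
    }
    score = sum(weights[name] * signals.get(name, 0.0) for name in weights)
    return min(score, 1.0)

def should_demote(signals: dict, threshold: float = 0.5) -> bool:
    # Tweets above the threshold aren't removed; they're hidden behind
    # "Show more replies" or filtered out of default search results.
    return troll_likelihood(signals) >= threshold

print(should_demote({"unconfirmed_email": 1.0, "bulk_signup": 1.0}))  # True
print(should_demote({"unconfirmed_email": 1.0}))                      # False
```

The key design point the announcement emphasizes is that such a score reflects behavior rather than the content of the tweets themselves, which is why demoted tweets remain viewable rather than being deleted.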
We're tackling issues of behaviors that detract from the public conversation on Twitter. This technology focuses on behavior, not content.
You can see the full conversation by tapping Show more replies or choose to see everything in your search settings.
— Twitter Safety (@TwitterSafety) May 15, 2018
So far, tests have shown that this approach has led to a four percent drop in abuse reports from search and an eight percent drop in abuse reports from conversations. "Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone's Twitter experience better," wrote Gasca. "This technology and our team will learn over time and will make mistakes. There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We'll continue to be open and honest about the mistakes we make and the progress we are making."