Twitter is working on a new tool that will automatically detect and block abusive accounts for users, the company said during its recent 2021 Analyst Day presentation. Twitter didn't share many details about the feature. However, two screenshots the company shared show a new "Safety mode" option users will be able to toggle on to "automatically block accounts that appear to violate the Twitter Rules, and mute accounts that might be using insults, name-calling, strong language, or hateful remarks." According to the slides, the feature will limit a flagged account's ability to tweet at you for seven days, in addition to showing its replies to fewer people. Twitter will push a notification to your phone to tell you when the tool is doing its job.
A feature like this has been a long time coming. Historically, Twitter hasn't done the best job of policing its platform, leading to rampant harassment of some individuals. Things came to a head in 2018, when Congress asked Jack Dorsey why it took the company more than five hours to remove an abusive tweet aimed at Meghan McCain. At the time, Dorsey promised the company would take a more proactive approach to the problem. Since then, Twitter has introduced moderation algorithms that it says detect more than 50 percent of abusive tweets before users flag them. Twitter did not share a timeline for when it plans to roll out Safety mode.