A year after it began testing prompts that asked users to “rethink” mean replies, Twitter is expanding the feature to all English-language users on iOS and Android. The feature, similar to anti-bullying measures from other companies, detects “potentially harmful or offensive” replies and nudges users to change their tweet before sending it.
The company says it has made improvements over the last year to reduce cases where people might see the prompts unnecessarily. For example, its algorithms now factor in “the nature of the relationship between the author and replier,” since people who know each other may be more likely to make jokes or communicate differently than strangers would. The system can also “better account for situations in which language may be reclaimed by underrepresented communities and used in non-harmful ways.”
The prompts are one of several updates Twitter has made to reduce bullying and harassment and to spur “healthier conversations.” The company notes that tests of the feature have shown some success, with 34 percent of people who received prompts opting to revise or delete their original reply. That may not sound particularly high, but Twitter says the prompts appear to have had downstream effects as well, with prompted users composing fewer offensive replies in the future.