Earlier this year Amnesty International released a report discussing what many of Twitter's female users already know: the social network is not always a great place to be if you're a woman. Now, a new study reveals hard statistics on just how toxic the situation is. According to the report by Amnesty International and global AI software company Element AI, female journalists and politicians received an abusive tweet every 30 seconds on average on Twitter in 2017.
The study is the largest ever to examine the way women are targeted with hate speech online. Some 1.1 million abusive tweets were sent to the women, who included members of the US Congress, female UK MPs and journalists employed by a variety of political websites. With the help of volunteers, the researchers sifted through nearly 300,000 tweets mentioning one of the 778 women on their list, and noted abusive content relating to gender, race and sexuality.
The so-called "Troll Patrol" also found that black women were 84 percent more likely to be mentioned in abusive tweets than white women. "Troll Patrol means we have the data to back up what women have long been telling us -- that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked," said Milena Marin, Senior Advisor for Tactical Research at Amnesty International.
"We found that, although abuse is targeted at women across the political spectrum, women of colour were much more likely to be impacted, and black women are disproportionately targeted. Twitter's failure to crack down on this problem means it is contributing to the silencing of already marginalized voices."
The findings aren't likely to come as a surprise to Twitter -- the company has repeatedly publicized its efforts to clean up the platform, with CEO Jack Dorsey recently appearing before the House Committee on Energy and Commerce to announce his intention to "increase the health of public conversation." Twitter says it is investing heavily in machine learning designed to guard against abusive tweets, but as noted by the Financial Times, the platform remains guarded about the way its algorithms are trained and how abuse reports are handled.