Tech companies are getting better at managing hate speech, according to the European Commission (EC). The findings come as part of the EC’s fifth evaluation of the 2016 Code of Conduct on countering hate speech online. The code — which isn’t legally binding — was signed by Facebook, Twitter, YouTube and Microsoft four years ago, with each committing to ensuring their platforms "do not offer opportunities for illegal online hate speech to spread virally." Since then Google, Snapchat and Instagram have also signed up.
The latest review builds on improvements seen in recent years. The EC found that, on average, 90 percent of flagged content was assessed by the platforms within 24 hours, compared with just 40 percent in 2016 and 81 percent in 2018. The gain is notable because much of the groundwork for complying with the code was laid before 2018, so the figures reflect sustained progress rather than a one-off jump. Of content deemed to be illegal hate speech, 71 percent was removed in 2020, up from just 28 percent in 2016. Platforms also responded with feedback to 67.1 percent of the notifications they received, slightly above the previous figure of 65.4 percent, although Facebook remains the only platform that systematically informs its users; the EC says there is room for improvement on all the others.
As Didier Reynders, Commissioner for Justice, said, "I welcome these good results. We should, however, not satisfy ourselves with these improvements and we should continue the good work. I urge the platforms to close the gaps observed in most recent evaluations, in particular on providing feedback to users and transparency." Reynders said that the forthcoming Digital Services Act should help in this respect, and that the EC is also considering "binding transparency measures" to clarify how platforms deal with hate speech.