Facebook made the decision Tuesday, and COO Sheryl Sandberg was among those who put together the policy. The social network will direct those who search for or post white nationalist or separatist content to a nonprofit called Life After Hate, which helps people leave hate groups.
"We decided that the overlap between white nationalism, separatism, and white supremacy is so extensive we really can't make a meaningful distinction between them," Brian Fishman, Facebook's counterterrorism policy director, told Motherboard. "And that's because the language and the rhetoric that is used and the ideology that it represents overlaps to a degree that it is not a meaningful distinction."
Given Facebook's difficulties in policing banned content (consider how quickly footage of the New Zealand mosque shootings proliferated across the network), it remains to be seen how effectively it will moderate white nationalism and separatism. Indeed, the company said that implied and coded messaging related to the ideologies won't be banned straight away, partly because it's more difficult to monitor the site for such content. Nor will Facebook prohibit material on separatism and nationalism more generally, such as content about the Basque separatist movement, outside of white separatism and nationalism.
Facebook will use machine learning to detect white nationalist and separatist content, along with a system that finds and deletes images previously deemed to include hate speech. It uses similar tactics to detect and remove terrorism-related material. The decision is likely to prove controversial among some free speech advocates, and it may give those who have accused Facebook of anti-conservative bias further evidence to support that stance.