The company pointed to evidence of this open approach working elsewhere, and argued that transparency was important. "Hiding things doesn't automatically make them disappear," Gadde said. However, the company didn't say which way it was leaning.
Proponents on both sides have been vocal. Free speech advocates have long argued that bans merely drive hatemongers underground, and might not deter them at all if they switch to anything-goes platforms like Gab. They've also warned that these crackdowns might punish the wrong people, sweeping up those on the periphery and stifling expression instead of addressing the root cause of the problem.
However, others have contended that there's evidence this sunshine-as-disinfectant approach doesn't work. Twitter, for example, has hosted white supremacists for years, with harassment and threats a regular part of that presence. If they were receptive to moderate voices, wouldn't there have been signs of them mending their ways? Media Matters' Angelo Carusone pointed out to Motherboard that white supremacists frequently run collective harassment campaigns and fake accounts -- they're still living in a bubble. If Twitter lets them stay, it might just be giving them a wider audience than they'd otherwise have.
Whatever the solution, the study may be overdue. It comes years after the issue first surfaced, and at a point when hate groups appear to be growing in confidence. It also follows Facebook's decision to take a hardline stance against hate purveyors after years of letting them stay on its own platform. Whether that choice was right or wrong, Facebook has already had this debate -- Twitter would be late to the discussion.