The internet's power to connect people is undisputed, but there's no barrier on what sort of people can be brought together. The question of what responsibility, if any, the companies enabling such connection bear is thorny and complex. Section 230 of the Communications Decency Act offers broad immunity to online publishers, saying that these businesses are not responsible for their users' actions.
That hasn't stopped several other victim groups from attempting to force Facebook, Twitter, Google and others into action. Late last year, relatives of those murdered during the Pulse Nightclub attack filed a very similar lawsuit in Detroit federal court. As did the families of five victims of a Tel Aviv terror attack and the widow of Lloyd Carl Fields Jr, who was murdered in Jordan.
None of these cases has -- yet -- made any real progress in the courts, and it's unclear if judges will be sympathetic to their pleas. For its part, Facebook has pledged, through CEO Mark Zuckerberg, to hire 3,000 more moderators over the year. Those moderators, while principally engaged in preventing violent videos, will also be tasked with helping the company get better at removing "hate speech and child exploitation."
Meanwhile, at least one study suggests that Twitter's efforts to tackle extremism have been broadly successful. In early 2016, researchers found that pro-terror activity on the social network had slowed after Twitter began mass-banning upwards of 125,000 ISIS-sympathetic accounts. Similarly, Telegram has worked to shut down messaging channels that it believes are used to propagate violence.
That may now be too little, too late, however, as reports have emerged claiming that ISIS is developing its very own social media platform.