In a blog post, Disqus VP of People and Culture Kim Rohrer spelled out the steps the company will take. The first is the introduction of a feedback tool that allows users to highlight when a website is violating its Terms and Policies. Individual commenters can still be flagged, but if a community as a whole exhibits toxic language, harassment and hate, Disqus will decide whether it should be allowed to remain on the platform.
Ultimately, though, the onus will remain on publishers and moderators. Some publishers, including alt-right political sites like Breitbart, have no interest in moderating their discussion forums, but for those that want to promote free speech while eliminating trolls, Disqus says it's developing new tools. In the future, moderators will be able to shadow ban users, making their comments invisible to everyone except the commenters themselves, and to give commenters timeouts.
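The shadow-ban behaviour described above can be sketched in a few lines. This is purely an illustration of the concept, not Disqus's implementation; the `Comment` class, the `shadow_banned` set and the function names are all assumptions for the example.

```python
# Minimal sketch of shadow banning: a banned user's comments remain
# visible to that user, but are filtered out for everyone else.
# Not Disqus's code; names and structures are hypothetical.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

shadow_banned = {"troll42"}  # hypothetical moderation state

def visible_comments(comments, viewer):
    """Return only the comments the given viewer should see."""
    return [
        c for c in comments
        if c.author not in shadow_banned or c.author == viewer
    ]

thread = [Comment("alice", "Great article"), Comment("troll42", "spam")]
print([c.text for c in visible_comments(thread, "bob")])      # banned user's comment hidden
print([c.text for c in visible_comments(thread, "troll42")])  # they still see their own
```

The point of the design is that the banned user gets no signal they were sanctioned, so they have less incentive to create a new account.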
Disqus believes it can automate some of these processes by flagging content through machine learning. Algorithms could detect repeat occurrences of certain words and phrases, helping publishers by bringing toxic comments to their attention. In a nod to recent incidents on YouTube, the company will also allow advertisers to choose where their assets appear, so they don't run alongside discussions they don't want to be associated with.
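The flagging step described above, detecting repeat occurrences of certain words and phrases and surfacing them to publishers, could look something like the crude stand-in below. Disqus hasn't published its detection approach, and a real system would use a trained model rather than literal phrase matching; the blocklist and thresholds here are invented for illustration.

```python
# Illustrative sketch of phrase-based comment flagging, not Disqus's model.
# Counts occurrences of publisher-flagged phrases and marks a comment
# for moderator review once repeats pile up.
import re
from collections import Counter

FLAGGED_PHRASES = ["go away", "nobody wants you"]  # hypothetical blocklist

def toxicity_hits(text: str, phrases=FLAGGED_PHRASES) -> Counter:
    """Count how often each flagged phrase appears in a comment."""
    lowered = text.lower()
    return Counter({p: len(re.findall(re.escape(p), lowered)) for p in phrases})

def needs_review(text: str, threshold: int = 2) -> bool:
    """Bring a comment to the moderator's attention once hits reach the threshold."""
    return sum(toxicity_hits(text).values()) >= threshold

print(needs_review("Go away. Seriously, go away."))  # True: phrase repeated
print(needs_review("Interesting point, thanks."))    # False
```

In practice this kind of filter only pre-sorts the queue; the article's framing keeps the final call with the publisher's moderators.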
"If a publication is dedicated to toxic or hateful discourse, no software or product solutions will help to combat hate speech or toxicity," Disqus said in a statement. "In those cases where a site is determined to be in purposeful violation of our Terms of Service, we will assert our stance and enforce our policy."