Twitter and Facebook have taken online harassment more seriously in 2016, though their approach is still light-handed, with the former taking years to finally ban one of its worst pitchfork-marshaling demagogues. Rather than cracking down harder through moderation, Instagram is putting abuse prevention in the hands of its users. They will soon be able to set up word filters, giving them control over the tone of discussion below each image, or the option to turn off comments on a post entirely. But is it enough to simply let folks block triggering phrases?
The feature is already being tested on "high-profile" celebrity accounts, which presumably field a large volume of comments from other users. It will officially roll out to those accounts first and then to the masses in the next few weeks, according to The Washington Post.
Keeping online communities safe without overly restricting free speech is a tough balance: land too hard on one side and you'll enrage proponents of the other. But social networks' previous hands-off strategy of letting the community sort itself out has brought accusations of complicity when they fail to prevent harassment and abuse. Letting users block certain offensive or inflammatory words will hopefully head off some escalation and term-specific targeting.
Of course, trolls and haters shielded with the anonymity of the internet will probably find a way around the block in the same way they have since AOL chatrooms got parental filters: misspellings, euphemisms, and coded language. The exact bigoted or derogatory terms might be banned, but determined thugs will always find a way to get their words heard.
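That cat-and-mouse dynamic is easy to illustrate. A filter that only matches blocked words exactly (a minimal hypothetical sketch, not Instagram's actual system, and with a made-up blocklist) is defeated by a single swapped character:

```python
# Hypothetical blocklist a user might configure; not real Instagram data.
BLOCKED = {"jerk"}

def is_blocked(comment: str) -> bool:
    # Naive exact-match filter: hide the comment only if one of its
    # words matches a blocklisted term verbatim.
    return any(word.lower() in BLOCKED for word in comment.split())

print(is_blocked("what a jerk"))  # True: the exact word is caught
print(is_blocked("what a j3rk"))  # False: a one-character misspelling slips through
```

Real filters can fight back with stemming, fuzzy matching, or learned classifiers, but each countermeasure invites a new workaround.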
It's also unclear from the Post's report whether Instagram will let users block comments only on a per-post basis or turn them off account-wide. Meanwhile, we're still waiting for a feature Facebook has had for years: making your posts visible only to certain friends, or invisible to known trolls.