Each time, I did the only thing you can do when you find yourself abused on Instagram: I swiped on the comment, hit the block button and reported it as hate speech. Most recently, a stranger left comments on half a dozen of my photos, one of which was from two-and-a-half years ago (meaning he'd had to spend some time digging through my archive to find it).
After spending however many minutes it took to flag each comment about my neck flaps (what?), my fivehead (fuck you) and my "hopefully-cancerous" moles (sigh), I decided I was done. Not done with Instagram, but done sharing my photos with the world. I don't like feeling like I'm giving in to the trolls, but with barely any control over who can see or comment on my posts, I don't feel safe enough on Instagram to stay public.
To be clear, this isn't about me having thin skin. Calling me gay because I posed for a picture with my future sister-in-law isn't as cutting as you think. And to the guy who thinks I'm flat-chested, well, you're just wrong. No, this is about my sense of safety. It's a creepy feeling when a stranger goes out of his way to insult me, and to not even know why.
But according to a Washington Post report published Friday, I might soon be able to reopen my account to the public. After speaking with Instagram's head of public policy, the Post was able to confirm that the company is testing improved comment-moderation tools for "high volume comment threads," including the ability to block certain words and disable comments on individual posts. "High-volume threads" is, of course, a euphemism for celebrity accounts -- the Taylor Swifts and Kim Kardashians of the world -- but Instagram told the Post that these features will eventually roll out to the rest of the community as well. In addition, an Instagram spokesperson tells me that the company has a team dedicated to keeping the community safe.
That's great, but also too little, too late. This should have been a priority when Instagram launched six years ago. Even now that it is, the moderation tools are skimpy, and only famous people have the privilege of using them. Meanwhile, these features have long been available on other social networks -- in some cases even on Facebook, which owns Instagram.
These are not esoteric tools, either; many people would benefit from them. Not just public figures like me, but everybody -- all 500 million users. Think of everyone whose posts became more widely visible just because they used a popular hashtag. And think of the teenagers who could be spared some public bullying if they, too, had finer control over their comments.
And it's really those people whom Instagram should have been keeping in mind all these years. As painful as harassment has been for me, I'm at least in a privileged position: I help run a large tech-news site that frequently covers Instagram. I have the platform to write an editorial like this one, and I can email a human at the company and tell her about my experience.
In Instagram's defense, it's not the only social network with a bullying problem. Twitter has long been a hotbed for harassment, with a racist campaign against comedian Leslie Jones being the most recent high-profile example. What makes Instagram different, though, is that the solutions have always seemed painfully obvious. Instagram was correct: We do need the ability to disable comments on select posts, or all of them, for that matter. But there are so many other no-brainer solutions not mentioned in that Post report. Give us the option of approving all comments before they go live, or to allow comments only from people we follow. Oh, and add a mute function, please.
None of these features would fundamentally change the site, either. This isn't Twitter, whose entire premise is predicated on people speaking in public. For many of us, it's really about the likes.
Besides, fewer comments would be good news for Instagram as well. If you let me disable comments from randos, I would never have to block anyone or report abuse, which means no one at Instagram would have to read my harassment report to decide if the comment in question fits the company's murky definition of hate speech. Not that Instagram ever responds to abuse reports anyway -- it doesn't. For all I know, no one's even reading them. Regardless, with granular privacy controls in place, Instagram wouldn't have to act as an arbiter of hate speech, as Twitter so frequently does. I'm not sure why that didn't occur to the company earlier.
What rankles most is that these tools would probably not have taken long to implement, what with all the engineers Instagram has at its disposal. Even if this were a serious undertaking, the company has already had plenty of time. Indeed, Instagram's silence on harassment has been damning. This calendar year alone, the company has taken it upon itself to block swear words in comments, add an in-photo text translator, lengthen the video limit to 60 seconds, block links to Telegram and Snapchat, and screw with the order of people's feeds. That's to say nothing of the energy the company has invested in censoring nipples, launching a failed Snapchat competitor and trying to make Instagram Direct a thing.
It's not that Instagram didn't have the time or resources before now to take on harassment -- it's that until now, it wasn't a priority.