Musical.ly stumbled in moderating self-harm content

It's important for a service primarily used by teens.


Swapna Krishna
March 12th, 2018

One issue that many tech start-ups must face is how to deal with harmful content, especially as their services start amassing a loyal following. Lip-syncing app Musical.ly is facing just this challenge. Writer Anastasia Basil was screening the app to see whether it would be appropriate for her 10-year-old daughter and found that the platform is rife with keywords referencing self-harm, such as #cutting and #selfhate.

BuzzFeed News took note of Basil's Medium post on the topic and contacted Musical.ly. At that point, the service took steps to ban searches for the keywords mentioned in the article. Musical.ly told BuzzFeed that "its process for banning terms from search is always evolving."

But the question is whether that's enough. Clearly, it took a news organization reaching out before Musical.ly took steps to address the issue. As a service primarily aimed at and used by teens, the company should already have considered its approach to moderating sensitive issues like self-harm.

Back in 2016, Instagram rolled out suicide prevention tools that allowed users to report posts from people who might need help, as well as offering support options for specific hashtag searches. The company worked with the National Eating Disorders Association and the National Suicide Prevention Lifeline to craft the language. Now, when a person searches for self-harm and eating disorder-focused hashtags on the service, a pop-up allows them to get support with one click. The example is there for Musical.ly to follow; let's hope the company addresses these issues as proactively as possible.
