Facebook testing AI that helps spot suicidal users
It's also adding real-time self-harm prevention tools to Facebook Live and Messenger.
Facebook has unveiled new tools to help prevent suicide, noting that someone dies by suicide every 40 seconds worldwide and that it is the second leading cause of death for young people. While the site already has self-harm prevention features, they rely on users to spot and report friends' problematic posts. Now, the company is testing AI that can detect comments "likely to include thoughts of suicide." Flagged posts can then be reviewed by the company's Community Operations teams, opening up a new way for troubled users to get help.
Facebook's AI is also making it easier for users to help friends in trouble. Using pattern recognition, it checks posts and, if needed, makes the "suicide or self injury" reporting options more prominent. For now, though, the AI detection and reporting options, whether aided by friends or Facebook employees, are running as a "limited test" in the US.
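Facebook hasn't published how its pattern recognition works, but as a rough illustration of the general idea, a text-flagging step that queues posts for human review might look something like the sketch below. Everything here is hypothetical: the phrases, the function names and the matching approach are placeholders, and a real system would use a trained classifier on far richer signals rather than a handful of regular expressions.

```python
import re

# Hypothetical illustration only; Facebook has not disclosed its model.
# Assumed example phrases -- a real system would learn signals from
# labeled data instead of using a fixed pattern list.
RISK_PATTERNS = [
    re.compile(r"\bwant to die\b", re.IGNORECASE),
    re.compile(r"\bend it all\b", re.IGNORECASE),
    re.compile(r"\bno reason to live\b", re.IGNORECASE),
]

def flag_for_review(post_text: str) -> bool:
    """Return True if the post should be queued for human review."""
    return any(p.search(post_text) for p in RISK_PATTERNS)

# Flagged posts go to a human review queue, not an automated action,
# mirroring the article's description of Community Operations review.
posts = ["Had a great day at the park", "I feel like there's no reason to live"]
review_queue = [p for p in posts if flag_for_review(p)]
print(review_queue)  # ["I feel like there's no reason to live"]
```

The key design point the article describes is that the AI only surfaces candidates; humans decide what happens next, which is why the sketch ends at a review queue rather than any automated intervention.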
Facebook has also created new Messenger tools in collaboration with the Crisis Text Line, the National Eating Disorder Association, the National Suicide Prevention Lifeline and other organizations. These let at-risk users or concerned friends contact the groups over chat, either directly from an organization's page or via Facebook's suicide prevention tools. The Messenger program is also in the testing phase, but Facebook will expand it "over the next several months" so that the organizations can handle increased message volumes.
Finally, the social network has integrated suicide prevention tools into Facebook Live. Users who see a troubling livestream can reach out directly to the person and report the video to Facebook at the same time. Facebook will "also provide resources to the person reporting the live video to assist them in helping their friend," the company wrote. Meanwhile, the person sharing the video will see resources that let them reach out to a friend, contact a help line or view tips.
"Some might say we should cut off the livestream, but what we've learned is cutting off the stream too early could remove the opportunity for that person to receive help," Facebook Researcher Jennifer Guadagno told Techcrunch.