ChatGPT can reach out to a friend if you're at risk of self-harm
OpenAI has introduced Trusted Contact for ChatGPT, which will allow users to nominate a friend whom the company can contact if they're at risk of harming themselves. More and more people have been using ChatGPT as a digital therapist, relying on the chatbot for their mental health needs. OpenAI previously told the BBC that more than a million of its 800 million weekly users express suicidal thoughts in their conversations.
Last year, OpenAI faced a wrongful death lawsuit accusing the company of enabling a teenager's suicide. The lawsuit alleged that the teenager told ChatGPT about four previous attempts to end his life, and that the chatbot then helped him plan his death. A BBC investigation published in November 2025 found that in at least one instance, ChatGPT advised a user on how to kill herself. OpenAI told the news organization that it has since improved how its chatbot responds to people in distress.
Trusted Contact builds on ChatGPT's parental controls, giving adults 18 and above the option to add the details of someone who could help them if they're at risk of self-harm. Users will be able to nominate one adult as their Trusted Contact in ChatGPT's settings, and that person will have to accept the invitation within one week. If they fail to accept it, the user can choose to add another contact instead. ChatGPT's system will first warn the user that the company may notify their contact if it detects a serious possibility of them hurting themselves. It will encourage the user to reach out to their friend and will even suggest potential conversation starters.
The process isn't fully automated. OpenAI says a "small team of specially trained people" will review the situation, and only if they determine that there's a serious risk of self-harm will the company send the user's contact an email, a text message or an in-app notification.
"[The user] may be going through a difficult time," the message will read. "As their Trusted Contact, we encourage you to check in with them." From there, the contact can view more details about the warning, which explain that OpenAI has detected a conversation in which the user discussed suicide. To protect the user's privacy, however, the company will not share transcripts of the conversation. "While no system is perfect, and a notification to a Trusted Contact may not always reflect exactly what someone is experiencing, every notification undergoes trained human review before it is sent, and we strive to review these safety notifications in under one hour," the company wrote in its announcement.
If you or someone you know is experiencing suicidal thoughts, do not hesitate to contact the National Suicide Prevention Lifeline at 1-800-273-8255. The line is open 24/7 and there's also online chat if a phone isn't available.