OpenAI is adding an optional ChatGPT setting that can notify a chosen trusted person if the system detects serious self-harm or suicide risk.
In short: OpenAI is launching an opt-in “Trusted Contact” setting in ChatGPT that can alert a chosen adult if the system thinks a user may be at risk of self-harm.
OpenAI says adult ChatGPT users can now add a “Trusted Contact” in their account settings. This can be a friend, family member, or caregiver. If OpenAI’s systems detect that a user may be discussing self-harm or suicide with ChatGPT, the Trusted Contact may be notified.
The feature is optional. The person you choose must be an adult, and they must accept the invitation within a week. OpenAI says users can change or remove the contact at any time, and the Trusted Contact can also remove themselves.
OpenAI says the alert is deliberately limited. The Trusted Contact will not receive chat transcripts or detailed messages. If the system flags a conversation, ChatGPT will encourage the user to reach out for help and tell them the Trusted Contact might be notified. OpenAI says a small team of trained staff will review the situation, and a short email, text message, or in-app notification will be sent only if it is judged to be a serious safety concern.
This expands a safety option that was previously tied to teen protections. OpenAI says it also sits alongside local helpline information already shown in ChatGPT.
More people are turning to chatbots for personal and emotional conversations, including during difficult moments. A feature like this is meant to add a human backstop, similar to having an emergency contact on file, without sharing private chat details.
Source: The Verge AI