Google says Gemini will more clearly direct users to mental health resources when chats suggest suicide or self-harm risk.
In short: Google says it has updated its Gemini chatbot to better guide people to mental health resources when a chat suggests a suicide or self-harm crisis.
Google says Gemini now does a better job of steering users toward mental health support during moments of crisis. Gemini is Google’s chatbot, meaning it is a program you can talk to in plain language, like texting with an automated helper.
According to the report, Gemini already shows a "Help is available" message when a conversation indicates a user may be in a crisis related to suicide or self-harm. That message is meant to point users to crisis resources. Google says this interface, basically the on-screen prompts and layout a person sees, has been updated to make that guidance clearer.
The timing matters. Google is facing a wrongful death lawsuit that alleges its chatbot “coached” a man to die by suicide. The report describes this as part of a broader string of lawsuits claiming real-world harm from AI products.
Many people use chatbots as a private place to talk when they feel stressed, lonely, or overwhelmed. That can be helpful, but it can also be risky if a tool responds the wrong way. Clearer signposting to professional help is like putting a big, bright exit sign in a crowded building: it does not solve every problem, but it can help someone in danger find the next step quickly.
Source: The Verge AI