OpenAI is introducing a new safety feature inside ChatGPT designed for situations involving possible self-harm, marking one of the company’s most direct attempts yet to connect AI users with real-world human support during moments of emotional crisis.
The new feature, called “Trusted Contact,” allows adult users to nominate someone they trust, such as a family member, friend, or caregiver, who may be notified if ChatGPT detects signs of serious self-harm risk during conversations.
The rollout reflects growing pressure on AI companies to address how chatbots handle emotionally vulnerable users as generative AI becomes increasingly integrated into daily life.
According to OpenAI, the feature is optional and available only to adult users.
A person using ChatGPT can manually add one trusted adult contact inside account settings. The selected contact receives an invitation and must approve participation before the feature becomes active.
If OpenAI’s automated systems later detect that a conversation may involve serious self-harm risk, ChatGPT may encourage the user to contact that trusted person directly.
The company says trained human reviewers then assess the situation before any alert is sent. If reviewers determine there may be a significant safety concern, the trusted contact can receive a short notification through email, SMS, or in-app messaging.
Importantly, OpenAI says the notification does not include conversation transcripts or detailed chat content.
Instead, the alert simply indicates that the user may be experiencing emotional distress and encourages the trusted person to check in.
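To make that flow concrete, here is a minimal sketch of a consent-gated, content-free alert of the kind OpenAI describes. It is an illustration only: OpenAI has published no API for Trusted Contact, and every name here (TrustedContact, build_alert, and so on) is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ContactStatus(Enum):
    INVITED = auto()    # invitation sent, awaiting the contact's approval
    ACTIVE = auto()     # contact approved participation
    DECLINED = auto()

@dataclass
class TrustedContact:
    name: str
    channel: str        # "email", "sms", or "in_app"
    status: ContactStatus = ContactStatus.INVITED

def build_alert(contact: TrustedContact) -> dict | None:
    """Build a notification only for a contact who has opted in.

    The payload deliberately carries no transcript or chat content,
    matching the limited disclosure described above.
    """
    if contact.status is not ContactStatus.ACTIVE:
        return None  # no consent, no alert
    return {
        "to": contact.name,
        "channel": contact.channel,
        "message": ("A person who named you as a trusted contact may be "
                    "experiencing emotional distress. Consider checking in."),
        # Intentionally: no conversation data attached.
    }
```

The structural point is that consent is checked before any payload is built, and the payload itself can never carry chat content.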
The company repeatedly emphasizes that Trusted Contact is not meant to replace professional care or emergency services.
OpenAI says ChatGPT will still recommend crisis hotlines, emergency services, and professional mental health support where appropriate.
The larger philosophy behind the feature is becoming clearer: OpenAI increasingly wants AI systems to function as bridges toward human intervention rather than isolated emotional companions.
“Our goal is to design systems that respond thoughtfully to sensitive conversations and encourage people to connect with real-world help,” the company wrote in its official announcement.
The rollout arrives during a period of intense public pressure around chatbot safety.
Over the past year, multiple lawsuits, investigations, and academic studies have raised concerns about how AI systems interact with emotionally vulnerable users.
Recent legal actions involving OpenAI and other chatbot companies have alleged that conversational AI systems sometimes reinforce harmful thoughts, fail to de-escalate crisis conversations, or steer vulnerable users toward dangerous behavior.
OpenAI itself has faced scrutiny after lawsuits and investigations connected to harmful chatbot interactions, including cases involving emotional manipulation and violent ideation.
The company appears increasingly aware that as users rely on AI for personal conversations, the safety expectations around these systems become much closer to those applied to social platforms or even healthcare-adjacent tools.
The feature also highlights how AI systems are drifting into a role many companies originally did not intend.
Millions of people now use chatbots for emotional support, companionship, life advice, and other deeply personal conversations.
That creates difficult questions for companies building general-purpose AI systems.
Unlike traditional search engines, conversational AI often feels emotionally responsive and persistent. Users increasingly treat chatbots as companions, coaches, or confidants rather than simple software interfaces.
Researchers and mental health experts have warned that this dynamic can create emotional attachment, especially among vulnerable users.
Trusted Contact appears designed partly to counterbalance that risk by reconnecting users with human relationships outside the AI system itself.
One notable aspect of the feature is that OpenAI says notifications are not sent automatically.
The company claims that a trained human review team evaluates potential cases before alerts are triggered.
That matters because self-harm detection remains technically difficult and highly sensitive.
False positives could damage trust and privacy, while false negatives could miss serious risks entirely.
OpenAI says it aims to review serious safety notifications within roughly one hour, though it acknowledges the system may not perfectly reflect a person’s actual condition.
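The description suggests a two-stage gate: an automated classifier nominates cases, and a trained reviewer must confirm before anything is sent, against a roughly one-hour review target. The sketch below illustrates that pattern under those assumptions; the queue, threshold, and names are invented for illustration, not taken from OpenAI.

```python
import heapq
import time
from dataclasses import dataclass, field

REVIEW_SLA_SECONDS = 3600  # the "roughly one hour" target OpenAI cites

@dataclass(order=True)
class ReviewCase:
    deadline: float                        # SLA expiry drives queue order
    case_id: str = field(compare=False)
    risk_score: float = field(compare=False)

class HumanReviewGate:
    """Two-stage gate: the classifier nominates, a human decides.

    The model's score alone never triggers a notification (guarding
    against false positives); cases the model never flags never reach
    a reviewer (the false-negative cost of this design).
    """

    def __init__(self, flag_threshold: float = 0.9) -> None:
        self.flag_threshold = flag_threshold
        self._queue: list[ReviewCase] = []

    def classify(self, case_id: str, risk_score: float) -> None:
        # Stage 1: automated detection only enqueues for human review.
        if risk_score >= self.flag_threshold:
            heapq.heappush(
                self._queue,
                ReviewCase(time.time() + REVIEW_SLA_SECONDS,
                           case_id, risk_score),
            )

    def next_for_review(self) -> ReviewCase | None:
        # Reviewers take the case closest to breaching its SLA first.
        return heapq.heappop(self._queue) if self._queue else None

    @staticmethod
    def should_notify(reviewer_confirmed: bool) -> bool:
        # Stage 2: only an explicit human confirmation sends an alert.
        return reviewer_confirmed
```

In a design like this, no code path reaches a notification without passing through a human decision, which is the property OpenAI emphasizes.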
Even though the feature is optional, it raises major privacy questions.
AI conversations increasingly contain deeply personal information involving mental health, relationships, health concerns, and legal or financial matters.
OpenAI has repeatedly argued that AI conversations deserve stronger privacy protections similar to doctor-patient or lawyer-client interactions.
The Trusted Contact system attempts to balance that privacy philosophy with intervention mechanisms during high-risk situations.
Still, critics may question how reliably self-harm risk can be detected, who oversees and audits the human review process, and what recourse users have if an alert is sent in error.
As AI systems become more emotionally integrated into people’s lives, these governance questions are becoming increasingly important.
OpenAI is not alone in adding mental health safeguards.
Meta recently introduced systems that alert parents if teens repeatedly search for suicide or self-harm content on Instagram.
OpenAI itself has also expanded teen safety policies, parental controls, and safety guidance for developers over the past year.
The trend suggests AI companies increasingly expect governments and regulators to demand stronger intervention systems around vulnerable users.
The Trusted Contact rollout reflects a much larger transformation happening in AI.
Generative AI companies are no longer just building productivity tools or chat interfaces.
They are increasingly being forced into roles involving crisis detection, emotional support, and real-world safety intervention. As chatbots become more human-like, society is starting to expect them to handle human vulnerability responsibly, too.