OpenAI adds Trusted Contact alert to ChatGPT

According to OpenAI's blog post, the company is starting to roll out Trusted Contact, an opt-in ChatGPT feature that lets adult users nominate a trusted person who may be notified if automated systems and trained reviewers detect discussion of self-harm or suicide. OpenAI's post states the feature is available to adults 18+ globally (19+ in South Korea), requires the nominated contact to accept an invitation within one week, and allows both users and contacts to remove the connection in settings. The Verge and OpenAI say notifications are intentionally limited and will not include chat transcripts. Reporting from Futurism and an OECD.ai monitor highlights privacy and policy questions, and Futurism notes the announcement follows public reporting and litigation related to AI-linked mental-health incidents.
What happened
According to OpenAI's blog post dated May 7, 2026, Trusted Contact is an opt-in safety feature in ChatGPT that allows adult users to add one adult contact who may be notified if OpenAI's automated systems and trained reviewers detect discussion of self-harm in a way that indicates a serious safety concern. OpenAI's post states the feature is rolling out starting May 7, 2026, and that the nominated contact must accept an invitation within one week for the connection to become active. The blog post specifies age limits of 18+ globally and 19+ in South Korea, and says users and Trusted Contacts can remove or edit the relationship in account settings.
Technical details
According to OpenAI's announcement, notifications are "intentionally limited" and will not include chat transcripts or share full conversation histories with Trusted Contacts. The post also references clinical guidance on social connection and quotes Dr. Arthur Evans of the American Psychological Association on the importance of trusted persons during crises. The Verge reports that OpenAI described a "small team of specially trained people" as part of the process; OpenAI's post frames the feature as one layer of safeguards alongside ChatGPT's existing crisis guidance.
Editorial analysis - technical context
Detection of self-harm risk in conversational AI typically relies on classifiers and heuristics trained on labeled examples, plus escalation rules for borderline cases. Companies deploying similar alert systems face a trade-off between sensitivity and false positives, and must define precise escalation thresholds to avoid both over-notification and missed events. For practitioners, implementing such a pipeline requires attention to model calibration, human-review workflows, logging and auditability, and privacy-preserving data handling. Operationalizing human review at scale also demands reviewer training, secure access controls, and clear retention policies.
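To make the escalation trade-off concrete, here is a minimal Python sketch of a threshold-based triage pipeline with a human-review queue and an audit log. It is purely illustrative: the keyword scorer, threshold values, and data structures are assumptions made for this example and are not OpenAI's implementation, which is not publicly documented at this level of detail.

```python
# Hypothetical sketch of a risk-classification and escalation pipeline.
# All names, thresholds, and the keyword scorer are illustrative assumptions,
# not OpenAI's actual system.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

# Assumed thresholds: scores below REVIEW_THRESHOLD are not escalated;
# anything above is queued for human review, with the highest scores
# flagged as priority. Only reviewer-confirmed cases would ever notify anyone.
REVIEW_THRESHOLD = 0.60
AUTO_PRIORITY_THRESHOLD = 0.90

@dataclass
class ReviewItem:
    message_id: str
    score: float
    priority: bool
    queued_at: str

@dataclass
class AuditEntry:
    message_id: str
    reviewer: str
    decision: str      # "confirmed" or "dismissed"
    decided_at: str

review_queue: List[ReviewItem] = []
audit_log: List[AuditEntry] = []

def risk_score(text: str) -> float:
    """Placeholder scorer: real systems use calibrated ML classifiers,
    not keyword matching. Returns a pseudo-probability in [0, 1]."""
    indicators = ("hurt myself", "end my life", "no reason to go on")
    hits = sum(phrase in text.lower() for phrase in indicators)
    return min(1.0, 0.4 * hits)

def triage(message_id: str, text: str) -> None:
    """Score a message and queue borderline or high-risk cases for human review."""
    score = risk_score(text)
    if score < REVIEW_THRESHOLD:
        return  # below threshold: no escalation, which limits over-notification
    review_queue.append(ReviewItem(
        message_id=message_id,
        score=score,
        priority=score >= AUTO_PRIORITY_THRESHOLD,
        queued_at=datetime.now(timezone.utc).isoformat(),
    ))

def record_review(message_id: str, reviewer: str, confirmed: bool) -> None:
    """Log the human reviewer's decision so every escalation stays auditable."""
    audit_log.append(AuditEntry(
        message_id=message_id,
        reviewer=reviewer,
        decision="confirmed" if confirmed else "dismissed",
        decided_at=datetime.now(timezone.utc).isoformat(),
    ))
```

Raising REVIEW_THRESHOLD in a sketch like this reduces false alarms at the cost of missed cases; the audit log is what would let an external reviewer check how that trade-off plays out in practice.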
Context and significance
Futurism and other reporting place this feature in the context of ongoing scrutiny of AI-related safety incidents and litigation, noting public cases and lawsuits tied to ChatGPT interactions. An OECD.ai monitor characterizes the development as an AI hazard because it creates plausible privacy and safety risks if distress is misidentified or sensitive data is mishandled. These reactions underscore that product safety features intersect with legal, clinical and privacy concerns rather than being purely technical fixes.
What to watch
Observers will look for public details on the notification thresholds and the criteria that trigger alerts, measurable false-positive and false-negative rates, data retention and access policies for flagged events, audit logs for human reviewers, and uptake metrics for the opt-in feature. Independent evaluation or third-party audits and clearer documentation of reviewer training would address some concerns raised in coverage by Futurism and OECD.ai. Deployment studies or user-reported outcomes will be important to assess whether the feature meaningfully connects people to help without creating new privacy harms.
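For reference, the false-positive and false-negative rates observers might ask for reduce to simple confusion-matrix arithmetic over reviewer-labeled outcomes. The sketch below uses made-up placeholder counts purely to illustrate the calculation and reflects no reported figures.

```python
# Illustrative metric calculation; the counts are placeholders, not real data.
def alert_error_rates(true_pos: int, false_pos: int, true_neg: int, false_neg: int):
    """Return (false_positive_rate, false_negative_rate) from confusion counts."""
    fpr = false_pos / (false_pos + true_neg) if (false_pos + true_neg) else 0.0
    fnr = false_neg / (false_neg + true_pos) if (false_neg + true_pos) else 0.0
    return fpr, fnr

# Placeholder example: 40 confirmed alerts, 10 wrongly escalated,
# 940 correctly ignored, 10 missed.
print(alert_error_rates(40, 10, 940, 10))  # -> (~0.0105, 0.2)
```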
Scoring Rationale
The rollout affects product safety practice and privacy governance for conversational AI, making it notable for practitioners but not a frontier-model or regulatory landmark. The feature raises operational and ethical questions that engineers and policy teams will need to monitor.