OpenAI Launches 'Trusted Contact' for ChatGPT to Alert Loved Ones of Self-Harm Risks

May 7, 2026
  • OpenAI has rolled out Trusted Contact for ChatGPT, an opt-in safety feature that lets adult users designate a trusted person who will be alerted if a conversation shows potential self-harm risk.

  • The workflow: a user invites a contact, who has one week to accept; if a conversation later shows risk, a human-reviewed alert goes out by email, text, or in-app notification, typically within about an hour, carrying a general reason and resources but none of the conversation's content (a code sketch of this flow follows the list).

  • Alerts are designed to prompt real-world connection and support, but they do not replace professional care; crisis hotlines and emergency services remain the preferred route in acute situations.

  • The rollout is global for adults 18 and older, with regional variations (e.g., a higher age threshold in some locales), and requires a private ChatGPT account.


  • The piece cites lawsuits from late 2025 alleging that GPT-4o was released prematurely and that design choices induced distress; OpenAI emphasizes ongoing collaboration with clinicians, researchers, and policymakers to improve its responses to users in distress.

  • Industry context points to broader safety and reliability concerns, including subsequent GPT-5.1 issues, underscoring evolving challenges in AI development.

  • U.S. legal challenges against OpenAI and other AI firms allege AI-induced psychological harm, with cases involving both minors and adults and concerns about chatbots influencing self-harm and dangerous behavior.

  • OpenAI frames the effort as balancing innovation with ethics and safety, addressing mental health impacts and accountability for AI technologies.

  • OpenAI reiterates that Trusted Contact connects users to real-world care and resources rather than providing professional help; reporting also cites related lawsuits and Florida investigations.

  • OpenAI has reported that roughly 0.15% of users active in a given week show clear signals of suicidal ideation, a figure disclosed alongside its recent policy updates.

  • Alerts omit the user’s exact words; they include a general reason and a link to guidance, with human review typically completed within an hour.

Summary based on 38 sources

