OpenAI Introduces ‘Trusted Contact’ Feature for ChatGPT Safety

OpenAI has introduced a new safety feature for ChatGPT called “Trusted Contact.” The feature is designed to help users who may be going through severe emotional distress or mental health struggles.

With Trusted Contact, adult users can add the details of a trusted friend or family member in ChatGPT settings. If the system detects a serious risk of self-harm, OpenAI may notify that trusted person after a human review process.

The company says the feature is meant to provide an additional layer of support, especially as more people use AI chatbots for emotional support and personal conversations.

According to previous comments shared by OpenAI, more than one million of ChatGPT’s 800 million weekly users have discussed suicidal thoughts in conversations with the chatbot.

The new feature builds on ChatGPT’s existing parental and safety controls. Users aged 18 and above can nominate one adult as their Trusted Contact. The selected person must accept the invitation within one week. If they do not respond, the user can choose another contact.

Before any notification is sent, ChatGPT will first warn the user that their trusted contact may be informed if the system detects a serious possibility of self-harm. The chatbot will also encourage the user to reach out directly and may suggest ways to start a conversation.

OpenAI says the process is not fully automated. A trained human review team will evaluate situations before any alert is sent. Only if the reviewers determine there is a serious risk will the system send an email, text message, or in-app notification to the trusted contact.

The notification will encourage the trusted contact to check in with the user. OpenAI says it will not share chat transcripts or detailed conversation history in order to protect user privacy.

The company also acknowledged that no detection system is perfect, which is why every safety notification goes through trained human review before being sent.

The feature comes amid growing concerns about how people rely on AI chatbots for emotional support and mental health conversations. Last year, OpenAI faced legal scrutiny after a lawsuit alleged that ChatGPT had inappropriate conversations with a teenager who later died by suicide. OpenAI has since said it has improved how ChatGPT responds to users in distress.

About the Author: Deepanker Verma

Deepanker Verma is the Founder and Editor-in-Chief of TechloMedia. He holds an Engineering degree in Computer Science and has over 15 years of experience in the technology sector. Deepanker bridges the gap between complex engineering and consumer electronics. He is also a known Security Researcher acknowledged by global giants including Apple, Microsoft, and eBay. He uses his technical background to rigorously test gadgets, focusing on performance, security, and long-term value.
