TechloMedia

OpenAI Introduces ‘Trusted Contact’ Feature for ChatGPT Safety


OpenAI has introduced a new safety feature for ChatGPT called “Trusted Contact.” The feature is designed to help users who may be going through severe emotional distress or mental health struggles.

With Trusted Contact, adult users can add the details of a trusted friend or family member inside ChatGPT settings. If the system detects a serious risk of self-harm, OpenAI may notify that trusted person after a human review process.

The company says the feature is meant to provide an additional layer of support, especially as more people use AI chatbots for emotional support and personal conversations.

According to previous comments shared by OpenAI, more than one million of ChatGPT’s 800 million weekly users have discussed suicidal thoughts in conversations with the chatbot.

The new feature builds on ChatGPT’s existing parental and safety controls. Users aged 18 and above can nominate one adult as their Trusted Contact. The selected person must accept the invitation within one week. If they do not respond, the user can choose another contact.

Before any notification is sent, ChatGPT will first warn the user that their trusted contact may be informed if the system detects a serious possibility of self-harm. The chatbot will also encourage the user to reach out directly and may suggest ways to start a conversation.

OpenAI says the process is not fully automated. A trained human review team will evaluate situations before any alert is sent. Only if the reviewers determine there is a serious risk will the system send an email, text message, or in-app notification to the trusted contact.

The notification will encourage the trusted contact to check in with the user. OpenAI says it will not share chat transcripts or detailed conversation history in order to protect user privacy.


The company also acknowledged that no detection system is perfect, which is why every safety notification passes through trained human review before being sent.

The feature arrives amid growing concern about how people rely on AI chatbots for emotional support and mental health conversations. Last year, OpenAI faced legal scrutiny after a lawsuit alleged that ChatGPT had inappropriate conversations with a teenager who later died by suicide. OpenAI has since said it has improved how ChatGPT responds to users in distress.
