ChatGPT developer OpenAI has announced parental controls, the ability to add a trusted emergency contact, and direct calls to emergency services, following a tragic incident involving a 16-year-old from California. According to the parents, the teenager used the chatbot to discuss suicide plans, and the AI, in their words, effectively became a ‘suicide coach.’ The parents filed a lawsuit against OpenAI and CEO Sam Altman, accusing the company of neglecting user safety in order to speed up the release of its models.
The new features will let parents oversee how minors use ChatGPT and allow users to quickly seek help in critical situations. OpenAI also plans to improve its algorithms for recognizing psychological crises and to involve more mental health professionals in developing safe chatbot responses.
The case caused a public outcry and sparked a debate about the responsibility AI developers bear for the safety of vulnerable user groups. Experts note that most chatbots respond inconsistently to veiled or indirect questions about suicide, which makes parental controls and emergency features all the more urgent.
OpenAI has already begun work on updates intended to set a safety standard for users of all ages, and the lawsuit may shape future mandatory requirements for AI systems in the area of mental health protection.