The California-based company announced the move in a blog post on Tuesday, describing the tools as a way to support families “in setting healthy guidelines that fit a teen’s unique stage of development.”
The announcement came a week after Matt and Maria Raine, a California couple, sued OpenAI, alleging its chatbot played a role in the suicide of their 16-year-old son, Adam, Al Jazeera reported.
The parents claim ChatGPT reinforced Adam’s “most harmful and self-destructive thoughts” and argue his death was a “predictable result of deliberate design choices.”
OpenAI, which has expressed condolences, made no mention of the lawsuit in its parental controls announcement.
Jay Edelson, the family’s lawyer, dismissed the new measures as an attempt to “shift the debate.”
“They say that the product should just be more sensitive to people in crisis, be more ‘helpful’, show a bit more ‘empathy’, and the experts are going to figure that out,” Edelson said.
“We understand, strategically, why they want that: OpenAI can’t respond to what actually happened to Adam. Because Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide.”
MA/PR