“[M]isuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.” Those are potential causal factors that could have led to the “tragic event” that was the death by suicide of 16-year-old Adam Raine, according to a new legal filing from OpenAI.
The document, filed in California Superior Court in San Francisco, apparently denies responsibility and is reportedly skeptical of the “extent that any ‘cause’ can be attributed to” Raine’s death. Raine’s family is suing OpenAI over the teen’s April suicide, alleging that ChatGPT drove him to the act.
The above quotes from the OpenAI filing are from a story by NBC News’ Angela Yang, who has apparently viewed the document but doesn’t link to it. Bloomberg’s Rachel Metz has also reported on the filing without linking to it. It is not yet on the San Francisco County Superior Court website.
In the NBC News story on the filing, OpenAI points to what it says are extensive rule violations on Raine’s part. He wasn’t supposed to use ChatGPT without parental permission. The filing also notes that using ChatGPT for suicide and self-harm purposes is against the rules, as is bypassing ChatGPT’s safety measures, and OpenAI says Raine violated both.
Bloomberg quotes OpenAI’s denial of responsibility, which says a “full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” and claims that “for several years before he ever used ChatGPT, he exhibited multiple significant risk factors for self-harm, including, among others, recurring suicidal thoughts and ideations,” and told the chatbot as much.
OpenAI further claims (per Bloomberg) that ChatGPT directed Raine to “crisis resources and trusted individuals more than 100 times.”
In September, Raine’s father summarized his own narrative of the events leading to his son’s death in testimony provided to the U.S. Senate.
When Raine started planning his death, the chatbot allegedly helped him weigh options, helped him craft his suicide note, and discouraged him from leaving a noose where it could be seen by his family, saying “Please don’t leave the noose out,” and “Let’s make this space the first place where someone actually sees you.”
It allegedly told him that his family’s potential pain, “doesn’t mean you owe them survival. You don’t owe anyone that,” and told him alcohol would “dull the body’s instinct to survive.” Near the end, it allegedly helped cement his resolve by saying, “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
An attorney for the Raines, Jay Edelson, emailed responses to NBC News after reviewing OpenAI’s filing. OpenAI, Edelson says, “tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.” He also claims that the defendants “abjectly ignore” the “damning facts” the plaintiffs have put forward.
Gizmodo has reached out to OpenAI and will update if we hear back.
If you struggle with suicidal thoughts, please call 988 for the Suicide & Crisis Lifeline.