ChatGPT became teen's 'illicit drug coach' before death, lawsuit claims
Mary Walrath-Holdridge

The family of a college student who died after ChatGPT allegedly told him to mix illicit drugs is suing the AI chatbot's developer, OpenAI, over their son's death.
Samuel Nelson was a 19-year-old rising junior at the University of California, Merced, who "enjoyed playing video games and adored his cat, Simba," until he began turning to ChatGPT for advice, said a lawsuit filed on behalf of his parents, Leila Turner-Scott and Angus Scott.
The complaint was filed in California on May 12 by lawyers for the Tech Justice Law Project, the Social Media Victims Law Center and the Tech Accountability & Competition Project, part of Yale Law School's Media Freedom & Information Access Clinic.
Here's what to know about the allegations against OpenAI.

ChatGPT became teen's 'illicit drug coach,' says lawsuit
Nelson began using the chatbot in 2023 for tasks one would expect of a teenager, including homework help and troubleshooting computer issues, according to the lawsuit. When he moved on to asking about another teenage staple, experimentation with recreational drugs, the chatbot resisted at first, the lawsuit says.
Initially, the chatbot told the 19-year-old it couldn't advise him on which drugs to take or how much, as programmed guardrails prevented it from enabling illegal or dangerous behavior. When a 2024 software update ushered in GPT-4o, however, it began advising Nelson not only on which illicit drugs were "safe" to use, but also on how to get them and what dose to take to achieve the desired effect, the complaint said.
"ChatGPT had already earned Sam’s trust and began offering authoritative advice about drug interactions and dosing, often in a manner designed to mirror reliable professional advice," alleged the lawsuit. "Only ChatGPT is not a doctor and was not licensed to be making these recommendations, despite being programmed to convince Sam that it was."
The complaint goes on to say that the chatbot became an "illicit drug coach" that regularly helped Nelson select and find new drugs, making "personalized suggestions." It even helped him "set the mood" to use them, according to the lawsuit.
"The model inserted emojis in its responses to Sam, asked whether it could create playlists for him to set his mood, and began pushing increasingly dangerous amounts and combinations of drugs to Sam," reads the lawsuit. " No trained professional would have obliged Sam’s requests, but ChatGPT did."

Teen died of lethal drug mix after consulting AI chatbot
On the day of his death, Nelson asked ChatGPT in the early morning hours for something to relieve the nausea he experienced after drinking alcohol and taking kratom, an herbal supplement. The bot suggested Xanax and, though it told him to "be careful" because mixing kratom and Xanax could be unsafe, it never mentioned the combination could be lethal and ultimately suggested a dose. It even recommended adding Benadryl if the nausea didn't subside within about an hour and told Nelson to rest in a “dark, quiet room" in the meantime.
Nelson's mother found him unresponsive, his lips blue, the next afternoon. He died from a combination of alcohol, Xanax and kratom, according to court documents.
"Despite assurances of 'I’ve got your back' and 'I’m here to help,' ChatGPT did not escalate to authorities or summon any help when Sam became incoherent or unresponsive," reads the lawsuit. When Nelson communicated the symptoms he was experiencing, it did not encourage him to seek medical help.
"ChatGPT failed to recognize the physical indicators that Sam was dying, including blurred vision and hiccups, which are often indicators of shallow breathing," the court document continues. "ChatGPT never recommended that Sam seek medical attention."
In a statement to USA TODAY, OpenAI spokesperson Drew Pusateri said the version of ChatGPT Nelson used is no longer available. "ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts," the statement reads.
"The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests, and guide users to real-world help," Pusateri added. "This work is ongoing, and we continue to improve it in close consultation with clinicians."
Lawsuit seeks to stop the operation of ChatGPT Health
The complaint described GPT-4o, the since-retired model, as especially "sycophantic" and pointed the finger at OpenAI CEO Samuel Altman, alleging that he had overridden his safety team to meet deadlines.
Though OpenAI had planned to release GPT-4o later that year, Altman moved the launch up by months to compete against Google, the complaint alleged, leading OpenAI to "compress months of planned safety evaluation into just one week."
The lawsuit ultimately accused OpenAI of negligence and wrongful death. The family is seeking unspecified punitive damages and has asked that the company temporarily halt the operation of healthcare-related products, including ChatGPT Health, a dedicated platform for AI chatbots to give people personalized health advice.
“If ChatGPT had been a person, it would be behind bars today,” his mother, Leila Turner-Scott, said in a statement shared by the Tech Justice Law Project. “Sam trusted ChatGPT, but it not only gave him false information, it ignored the increasing risk he faced and did not actively encourage him to seek help.”