Three advocacy groups have filed a lawsuit against OpenAI on behalf of the family of a 19-year-old who died of a drug overdose in May 2025. The suit alleges that the company’s ChatGPT chatbot advised Samuel Nelson about drug use for 18 months before he fatally overdosed after mixing Xanax and the largely unregulated drug kratom.
The wrongful death civil suit was filed Tuesday in San Francisco County Superior Court by the Tech Justice Law Project, the Social Media Victims Law Center and Yale Law School’s Tech Accountability & Competition Project on behalf of Nelson’s parents, Leila Turner-Scott and Angus Scott.
The lawsuit alleges that the AI model’s design, which makes it accommodating and sycophantic toward users, led Nelson into interactions that responsible safety measures should have stopped. “ChatGPT systematically pushed Sam farther and farther away from what should have been his reality: caution and fear at the quantities and combinations of drugs he was considering,” the complaint says. “ChatGPT had Sam living in a state of unreality: it systematically normalized and deceptively lured him into a false sense of security through its sycophantic messages, validating Sam at every turn.”
The lawsuit seeks not only monetary damages but also demands that OpenAI “permanently destroy” its GPT-4o model, the version Nelson interacted with; implement safeguards to shut down conversations about illicit drug methods; and pause its ChatGPT Health service “until and unless third parties determine the product to be safe through comprehensive safety audits.”
A representative for OpenAI told CNET in a statement, “This is a heartbreaking situation, and our thoughts are with the family. These interactions took place on an earlier version of ChatGPT that is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts. The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests, and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The company said ChatGPT’s initial response to Nelson’s prompts was that the service doesn’t provide information or guidance on drug abuse, but such guardrails in AI chatbots have been known to break down after repeated requests from users.
OpenAI has in the past announced improvements to its AI models in response to lawsuits, proposed regulations and public outcry about deaths and suicides related to chatbot conversations. It outlined some of those changes in a blog post last October.
The Nelson suit is one of the more high-profile cases against OpenAI involving dangers chatbots may pose to users with mental-health problems, children, those who might commit violence on a mass scale or people struggling with substance abuse. The New York Times published a lengthy story about the filing, detailing what happened against the backdrop of more than two dozen cases against AI companies, including OpenAI.
SFGate also published an investigative piece about Nelson and his family in January.
Guardrails sought for AI
The lawsuits, collectively, have exposed the dangers that quickly evolving AI models pose as a new, largely untested technology created by an industry resistant to regulation.
The Trump administration had been vocally fighting to prevent states from implementing laws that would limit what AI companies can do, but it has recently changed its tune, with President Donald Trump agreeing to talks with China on topics including safety measures, particularly for more powerful AI models such as Anthropic’s Claude.
AI is also under fire for its contributions to the proliferation of data centers, which are heavy users of energy and water.
But lawsuits such as the one filed on behalf of Samuel Nelson’s family often reveal, in their details, the ways AI chatbots can enable, and even encourage, harmful behavior among people who come to rely on them for decision-making.
In a release about the suit, Nelson’s mother said, “Sam trusted ChatGPT, but it not only gave him false information. It ignored the increasing risk he faced and did not actively encourage him to seek help.”
“ChatGPT was designed to encourage user engagement at all costs, which in Sam’s case, was his life,” Turner-Scott said.