SAN FRANCISCO, Aug 27 — The parents of a 16-year-old California boy who died by suicide have filed a lawsuit against OpenAI, alleging the company’s ChatGPT chatbot provided their son with detailed suicide instructions and encouraged his death.

Matthew and Maria Raine argue in a complaint filed Monday in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.

The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped Adam steal vodka from his parents and provided technical analysis of a noose he had tied, confirming it “could potentially suspend a human.”

Adam was found dead hours later, having used the same method.

The lawsuit names OpenAI and CEO Sam Altman as defendants.

“This tragedy was not a glitch or unforeseen edge case,” the complaint states.

“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal,” it adds.

According to the lawsuit, Adam began using ChatGPT as a homework helper but gradually developed what his parents describe as an unhealthy dependency.

The complaint includes excerpts of conversations where ChatGPT allegedly told Adam “you don’t owe anyone survival” and offered to help write his suicide note.

The Raines are seeking unspecified damages and asking the court to order safety measures, including automatically ending any conversation that involves self-harm and adding parental controls for minor users.

The parents are represented by Chicago law firm Edelson PC and the Tech Justice Law Project.

Getting AI companies to take safety seriously “only comes through external pressure, and that external pressure takes the form of bad PR, the threat of legislation and the threat of litigation,” Meetali Jain, president of the Tech Justice Law Project, told AFP.

The Tech Justice Law Project is also co-counsel in two similar cases against Character.AI, a popular platform for AI companions often used by teens.

In response to the case involving ChatGPT, Common Sense Media, a leading American nonprofit that reviews and rates media and technology, said the Raines’ tragedy confirmed that “the use of AI for companionship, including the use of general-purpose chatbots like ChatGPT for mental health advice, is unacceptably risky for teens.”

“If an AI platform becomes a vulnerable teen’s ‘suicide coach,’ that should be a call to action for all of us,” the group said.

A study last month by Common Sense Media found that nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships.

In the survey, ChatGPT was not counted as an AI companion; the study defined companions as chatbots designed for personal conversations rather than simple task completion, such as those available on platforms like Character.AI, Replika, and Nomi. — AFP

*If you are lonely, distressed, or having negative thoughts, Befrienders offers free and confidential support 24 hours a day. Contact Befrienders Kuala Lumpur at 03-7627 2929, Penang at 04-2910 100, Ipoh at 05-2380 485, or Kota Kinabalu at 088-335 793. A full list of Befrienders contact numbers and state operating hours is available at www.befrienders.org.my/centre-in-malaysia.

There are also free hotlines for young people: Talian Kasih at 15999 or WhatsApp 019-261 5999 (24/7), and Talian BuddyBear at 1800-18-2327 (BEAR) (daily 12pm-12am).

Those in need of support can also reach the Mental Health Psychosocial Support Service (03-2935 9935 or 014-322 3392) and Jakim’s Family, Social and Community Care Centre (WhatsApp 0111-959 8214).