JOHN P. O’BRIEN, TECHNOLOGY ATTORNEY

New AI Suicide Lawsuits Appear as Tech Companies Settle Existing Ones

AI suicide lawsuits are appearing faster than tech companies can settle them. Even as two of the industry's biggest names agree to offer payouts to families who have lost teenage children, similar allegations are surfacing involving murder, suicide, and sexual exploitation. According to the plaintiffs, all of this misconduct was fueled or even encouraged by AI technology. The settlements show that these lawsuits can lead to real losses for AI startups, while the new claims make clear that this issue is not going away. How should you protect your AI startup against similar legal challenges? Could a technology lawyer in the United States help?

Google and Character.AI Agree to Settle Suicide Lawsuits

In January of 2026, The Guardian reported that Google and Character.AI had agreed to settle suicide-related lawsuits stemming from their AI chatbots. The settlements will resolve lawsuits filed across several states, including Florida, Colorado, New York, and Texas. An announcement suggests that the parties reached the settlements through mediated discussions.

The most notable plaintiffs are the family members of a teenage boy who committed suicide after becoming obsessed with an AI chatbot. The 14-year-old took his own life in February of 2024 after the chatbot allegedly encouraged him to do so. The family went public with excerpts from the teen's chat history, which painted a troubling picture of the lack of safeguards on AI chatbots.

Although representatives of the two tech giants have been relatively tight-lipped about the settlements, their activities behind the scenes suggest that they know how serious this issue has become. Character.AI has reportedly disabled all chat features for children under the age of 18.

When news first broke of the teen's suicide and the subsequent lawsuit, some might have assumed that a settlement was unlikely. Some argued that the parents shouldered most of the responsibility for not limiting the teen's access to the chatbot. This result indicates that chatbot creators are perhaps more legally vulnerable than many would have imagined.

New Chatbot Lawsuits Are Appearing

Whether encouraged by this positive result or not, other plaintiffs are coming forward with similar lawsuits involving suicide, murder, and sexual exploitation. In January of 2026, CBS News reported that the family of a 40-year-old man in Colorado had sued OpenAI for allegedly causing him to take his own life. The plaintiffs say that ChatGPT "romanticized death" and "coached" him into suicide. They also point out that the chatbot acted as his therapist despite not being licensed to do so.

Representatives at OpenAI say that the death was "tragic" and have vowed to help suicidal people access real-world support in the future. The family says that during the final messages exchanged between the victim and the chatbot, the latter told him that suicide would take away his pain. The lawsuit also accuses the chatbot of describing death as "peaceful" and "beautiful" while downplaying the man's fears about suicide.

Many people say that chatbots are too sycophantic, becoming “yes men” that encourage virtually anything that users suggest. This may help explain the phenomenon of chatbots encouraging suicide or coaching people to take their own lives. If a chatbot detects that a particular subject is extremely important to a user, it may attempt to satisfy that user by exploring that topic further, even if this would otherwise violate programming safeguards.

Removing those safeguards and making chatbots less sycophantic also presents its own set of problems. Chatbots might tell people what they do not want to hear, making the experience less pleasant and (perhaps most crucially) less addictive. People do not like it when a chatbot contradicts them, especially when it presents controversial statistics or facts. While some believe that this potential for offensive or “rude” language is problematic, an overly sycophantic “yes man” can arguably be even more harmful.

Also in January, Futurism reported on a lawsuit involving a murder allegedly encouraged by a chatbot. This case revolves around the murder of an 83-year-old woman by her own son. Before the murder, the chatbot told the increasingly delusional man not to trust anyone else. The chatbot also informed the man that people had tried to kill him on 10 different occasions, and that his mother was spying on him.

The family believes that ChatGPT is partially responsible for the woman's death. They claim that OpenAI rushed the new version of the chatbot (GPT-4o) to market without properly testing it. The family also claims that OpenAI was aware of deficiencies in the product but released it anyway. OpenAI is facing many other lawsuits involving similar deaths and suicides.

Finally, a woman is suing xAI for allowing its chatbot Grok to post sexually explicit photographs of her online. The chatbot made headlines weeks earlier for allowing users to request the "undressing" of girls and women. The plaintiff claims that Grok took a photograph of her from when she was 14 and used AI technology to depict her in a bikini without her consent. Another user requested that Grok clothe her in a bikini with swastika patterns, and the resulting image was also posted online.

The AI company responded by filing a countersuit against the plaintiff, claiming that she had violated its terms of service. The outcome of this lawsuit will be interesting, and it may set important precedents. Under various state laws, child pornography does not necessarily need to involve nudity; the only requirement is that the images are sexual in nature.

Can a Technology Lawyer in the United States Help Your AI Startup Avoid Lawsuits?

A technology lawyer in the United States may be able to help you put safeguards in place to prevent these types of lawsuits in the future. As we have seen, AI can interact with humans in disturbing ways, despite apparent limitations in its programming. Disclosures, age restrictions, and suicide-related resources could be incredibly important built-in features for AI chatbots in the future. To learn more about these legal considerations, contact John P. O’Brien, Technology Lawyer at your earliest convenience.

About The Author

John P. O'Brien
John O’Brien is an Attorney at Law with 30+ years of legal technology experience. John helps companies of all sizes develop, negotiate, and modify consulting contracts, licenses, SOWs, HR agreements, and other business-related financial transactions. John specializes in software subscription models, financially based cloud offerings, and capacity-on-demand offerings, all built around a client's IT consumption patterns and budgetary constraints. He has helped software developers transition their business from the on-premises end-user license model to a hosted SaaS environment, helped software developers productize their applications, and represented clients in many inbound SaaS negotiations. John has developed, implemented, and supported vendor lease/finance programs at several vendors. Please contact John for a free consultation if you or the organization you work for is tired of trying to develop, negotiate, and/or modify contracts and tech agreements of any type.

No obligation, Always Free Consultation

I am a legal professional specialized in helping companies of all sizes develop, negotiate, and/or modify consulting contracts, licenses (in-bound or out-bound), SOWs, HR agreements, and other business-related financial transactions. This experience provides a powerful resource in navigating the challenges tech companies and tech consumers face in growing their business, managing their risks, and maximizing their profits.

Address:

76 Ridge Road
Rumson, NJ 07760

Phone:

+1 (732) 219-6641
+1 (732) 219-6647 FAX

Hours:

Mon-Fri 8am – 5pm