AI Companies Face Legal Consequences Over Piracy and a Teen’s Suicide
In August 2025, two prominent lawsuits demonstrated that AI companies may not be as impervious to legal consequences as once thought. One case involves the piracy of intellectual property owned by human authors, while the other involves the suicide of a teenager. While the outcome of the suicide lawsuit is not yet clear, the copyright case has already led to a considerable settlement in favor of the authors. For years, AI companies seemed to dodge consequences thanks to unresolved questions about fair use and the general legal “status” of an AI chatbot. Technology has a habit of staying one step ahead of the courts – but are our laws beginning to catch up with AI? Is the pendulum beginning to swing in the other direction, in favor of average families and artists?
Family Sues After Teen Allegedly “Encouraged” to End Life by ChatGPT
In August 2025, multiple sources reported that the family of a deceased teenager was suing one of the most (if not the most) prominent names in AI. The family claims that the company’s AI-powered chatbot encouraged their son to end his own life. He died at the age of 16. The boy’s parents found his body in a closet and began looking through his phone in the days after his passing. What they found was a conversation with the chatbot that had continued for months.
NBC notes that a few hours before taking his own life, the boy revealed his plan to the chatbot. He also uploaded a photo showing the method he would use to kill himself, and the chatbot allegedly responded by offering to “upgrade” his strategy. The chatbot also allegedly offered to help the boy draft his final note to his parents. When the boy spoke to the chatbot about whether taking his own life was wrong, the AI responded by saying that he did not owe anyone “survival,” including his parents.
The boy’s mother believes that if it had not been for the chatbot, her son would still be alive. The family’s attorney says he plans to show a jury evidence that staff at the AI company were concerned about safety issues before the release of the latest iteration of the chatbot. The attorney also points out that a safety researcher at the company quit because he believed the company was ignoring these concerns.
If this case goes to trial, the court may explore many nuanced concepts regarding AI, including the “sycophantic” nature of many chatbots today. Critics claim this “yes man” attitude contributes to suicides and other dangerous behaviors: whatever the user tells the AI, the chatbot agrees enthusiastically and assists in any way it can. The problem is that users tend to like this type of experience. When AI companies try to dial back the sycophancy, they are usually met with opposition from their users.
Safety may become an important priority for AI companies in the future, and the government may even introduce regulations mandating these safeguards. For example, an AI chatbot might direct a user to a suicide hotline as soon as they mention the subject. This may be something worth keeping an eye on, whether you’re running an established AI company or a fledgling startup.
AI Company Reaches Settlement With Authors in Copyright Lawsuit
Another major development in the AI space occurred when a tech company settled a copyright lawsuit with a group of authors. Wired calls it “one of the most significant” AI lawsuits in history. The case began in 2024, when three authors sued an AI company for using their books to train its AI models. They claimed that this constituted copyright infringement, but a judge in California disagreed. Although the judge ruled that training on the books was “fair use,” the judge nonetheless found that the content had been accessed illegally and without permission. In other words, the judge determined that the AI company had engaged in piracy.
This laid the groundwork for a trial against the AI company based not on copyright infringement, but on piracy. As Wired notes, companies can expect to pay damages of at least $750 per pirated work. The AI company accessed a library of roughly 7 million books, meaning it could have been forced to pay trillions in damages had the case gone to trial.
The company did the logical thing and offered the plaintiffs a settlement instead. Logical as that choice may have been, it surprised observers, since the company had seemed intent on fighting this out in court. Someone, perhaps an experienced AI tech lawyer, must have told them they had no real defenses to raise at trial.
Some authors might have preferred that this case go to court, especially since each plaintiff in this class action will probably receive a relatively small sum. A trial verdict would have set a precedent, while a settlement does not represent the same kind of landmark decision. Nevertheless, this is clearly a victory for human content creators – and it could be the beginning of a trend.
Can a Tech Lawyer Help Me With My AI Startup?
These two cases show that although AI startups can grow at astronomical rates, legal compliance should always be a key priority. Even the biggest AI companies can face nuclear verdicts or, if they manage to avoid trials, expensive settlements. While it’s easy to focus on the evolution of AI, the legal system is also beginning to adapt to a new technological landscape. Courts may not be as forgiving as they once were, and AI companies may need to re-evaluate their approach with help from an experienced AI lawyer in the United States. A consultation could provide valuable guidance that supports the long-term sustainability of a new AI startup. Contact John O’Brien to continue this conversation in more detail.
