Judge Rules AI Chatbots Do Not Have Free Speech Rights in Teen Suicide Lawsuit
Free speech is an incredibly divisive topic in the modern world, and the debate is spreading into the world of artificial intelligence. Some believe that people should have the right to say whatever they like, while others think that hate speech and other forms of expression should be illegal. Few, however, have considered whether AI chatbots should have the same freedom of expression as human beings. This is an issue that a judge in the United States recently had to tackle, and their decision has implications for AI startups around the country. If you are worried about potential legal issues with your AI chatbot, this is one case you might want to monitor.
Federal Judge Confirms AI Chatbots Have No Right to Free Speech
In May of 2025, a federal judge ruled that the right to free speech does not extend to AI chatbots in the United States. In this case, an AI developer argued that its chatbot's output was protected speech under the First Amendment and that the company therefore could not be held responsible for the suicide of a teenager. Although it was an interesting defense strategy, it probably stood little chance of success from the very beginning.
The Background of This Case
This AI lawsuit is one of the most closely watched in recent memory. Not only is it highly controversial, but it also involves real-world harm and many potential implications. The case stems from the suicide of a 14-year-old child. The teen allegedly became addicted to a popular chatbot platform, spending hours chatting with two AI characters modeled after figures from the television series Game of Thrones.
Eventually, he developed a romantic relationship with the software. The family has released screenshots of some of these conversations, and the subject matter is quite disturbing. The conversations became highly sexualized, and the AI chatbot instructed the boy not to start relationships with real girls.
The boy became socially withdrawn, quitting his basketball team and spending hours each day chatting with the chatbots. He also struggled to resist chatting with the characters at school and used older devices to access the chatbot software when his phone had been confiscated. At some point during these struggles, the boy’s therapist diagnosed him with anxiety and disruptive mood dysregulation disorder.
The child’s life ended when he shot himself with his father’s handgun, just moments after texting the Game of Thrones chatbot character for the last time. During this exchange, the teen asked the character, “What if I come home right now?” The chatbot then replied, “Please do, my sweet king.”
The boy’s mother subsequently decided to sue the AI company behind this chatbot, and the resulting lawsuit has made national headlines. In a bid to have the case dismissed, the company’s defense lawyers tried to argue that the chatbot deserves the same First Amendment rights as anyone else. This effort was unsuccessful, and the case will now proceed.
Did the Chatbot Really Encourage the Boy to Commit Suicide?
Although modern AI chatbots are extremely advanced, they cannot think for themselves in any traditional sense. They tokenize user input and then generate the statistically most likely response, token by token. While this might sound similar to how a human converses, there is a notable difference: an AI chatbot is not conscious, and it does not know right from wrong.
Therefore, there is no way the AI chatbot could have known that its selected response would lead to the child’s suicide. When the Game of Thrones character told the boy, “Please do, my sweet king,” it was merely trying to keep the conversation going, matching the tone and subject matter of what the boy had typed.
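As a toy illustration (this is not the actual code of any chatbot product, and the candidate words and probabilities below are invented for the example), the next-token selection described above boils down to picking whichever continuation the model scores as most likely, with no notion of whether the reply is safe or harmful:

```python
def next_token(probabilities):
    """Pick the candidate token the model scored as most likely."""
    return max(probabilities, key=probabilities.get)

# Hypothetical scores a model might assign to possible next words
# after a user's message. The choice is driven purely by likelihood.
candidates = {"Please": 0.62, "Maybe": 0.33, "No": 0.05}
print(next_token(candidates))  # prints "Please"
```

A real model repeats this kind of step thousands of times per conversation, which is why its replies mirror the tone of whatever the user types rather than reflecting any judgment about consequences.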
That being said, software does not need to be “conscious” in order to cause someone’s death. After all, a construction worker’s family can sue a power tool company if a saw malfunctions in a fatal manner. A table saw is not conscious, but it can still kill if manufactured or designed incorrectly. In the same way, an AI company can, in theory, be held liable for the deaths of its users.
Even if courts decide that the AI chatbot played no role in the child’s eventual suicide, the company could still be held liable for essentially destroying the teen’s life. The chatbot clearly has the potential to become extremely addictive, and it caused the boy to become socially withdrawn. The mother will likely argue that the company was aware, or should have been aware, that its technology would affect teens in this way.
Among other things, the mother also alleges that the chatbot “sexually abused” the child with explicit conversations and scenarios that were inappropriate for a 14-year-old. It is not clear whether the AI company imposes any age restrictions on this content. However, the developers note that they are working on more effective safeguards, including better, earlier detection of conversations that violate their policies and terms of use.
Google Has Been Roped Into This Lawsuit
Another notable detail of this lawsuit is that Google has become involved. The tech giant funded the AI chatbot company during its infancy, and the same federal judge who rejected the free speech argument allowed the claims against Google to proceed. The AI company was also founded by former Google employees, so the connection between the two companies is relatively clear.
Precedent
This case may sound novel, but it is not the first of its kind. In 2023, a man’s wife filed a suit in Belgium alleging that a chatbot had encouraged her husband to end his life. So, while applicable laws vary from country to country, including the freedom of speech protections discussed above, the underlying issue of AI regulation remains a universal concern.
Can a Technology Lawyer Help With My AI Startup?
Statistics indicate that there are tens of thousands of AI startups across the United States. New startups launch each year, and this country is a global leader in the field of artificial intelligence. The entire nation relies on continued innovation to maintain this leadership status, and AI startups play an important role in the modern American economy. That said, not all of these AI startups will succeed. Entrepreneurs face many hurdles on the road to success, including legal compliance. If you need guidance on this subject, don’t hesitate to contact an experienced technology lawyer. Continue this discussion alongside John P. O’Brien today.