JOHN P. O’BRIEN, TECHNOLOGY ATTORNEY

OpenAI Sued After ChatGPT “Makes Up Lies”

Within a very short period of time, ChatGPT has completely changed the world of online content. Freelance writers suddenly find themselves being replaced by artificial intelligence, and the software can mimic human writing almost perfectly. But those who rely on ChatGPT too heavily may encounter legal issues. On more than one occasion, ChatGPT has exhibited a tendency to completely fabricate events, facts, and even legal cases. Only a few weeks ago, a lawyer made the mistake of using ChatGPT to research a personal injury case filed in New York federal court, and his filing cited fabricated case law. It was only when the presiding judge reviewed this research that questions were raised. Eventually, the judge concluded that ChatGPT had “invented” past cases out of thin air, and the lawyer was citing precedents that did not actually exist.

It only takes one mistake to completely ruin a career, bankrupt a business, or trigger a financially crippling lawsuit. This is something that OpenAI – the company behind ChatGPT – is quickly discovering. It now faces a defamation lawsuit from a radio host who claims that the AI accused him of crimes he did not actually commit. Of course, ChatGPT is not a sentient being. In the end, it is a tool for humans. And when humans use tools in a negligent fashion, they can be held accountable.

For technology or software-related legal issues, consider getting in touch with John O’Brien. With specific experience in tech law, John P. O’Brien can guide you toward a positive outcome while providing targeted, personalized legal advice.

ChatGPT Accuses Radio Host of Financial Crimes With Zero Evidence

On June 9, Rolling Stone reported that OpenAI was being sued for defamation. The lawsuit stems from a journalist’s attempt to use ChatGPT to summarize a Second Amendment lawsuit. Instead of summarizing the real case, ChatGPT simply invented a completely different story.

In the process, the AI claimed that the lawsuit involved a radio host named Mark Walters being sued by the Second Amendment Foundation (SAF) for fraud and embezzlement. The AI-generated article also stated that Walters was the treasurer and CFO of the organization, even though he has never held either post. ChatGPT even managed to generate the full text of a completely fictional complaint, complete with its own case number. Try to look up this case, and you will soon find that it does not exist.

The problem is that the average person does not know how to look up a case number or confirm whether a lawsuit is real. But this never became an issue, since the article was never published. The journalist who generated the article is an apparent supporter of the Second Amendment, the SAF, and Mark Walters. The journalist even contacted the SAF to notify it of the issue and never shared any of the information with Walters. Still, Walters has decided to move forward with an official defamation lawsuit against OpenAI as a result of this debacle.

Despite numerous accounts of ChatGPT inventing facts in the past, this appears to be the first defamation lawsuit of its kind against OpenAI. A few months prior, an Australian politician discovered that ChatGPT had falsely claimed he was convicted of bribery. Although these claims were blatantly false, the politician only sent a warning in writing and decided not to sue. One has to wonder whether this defamation lawsuit has a leg to stand on. After all, the defamatory content was never actually published by any media organization.

In defamation lawsuits, plaintiffs must establish four things:

  • The defamatory statements were false
  • The defamatory statements were published to a third party
  • The defendant was at least negligent in making the statements
  • The plaintiff suffered real damages

One might argue that since the article was never published, the publication requirement is not met. However, one might counter that as soon as the article was generated and displayed, it was “published” to at least one third party – the journalist who requested it. It is also worth noting that the journalist sent the content to the SAF, although notifying the organization of the error is quite different from spreading the claims as fact. In addition, Walters arguably suffered no real monetary damages as a result of this AI-generated article, especially since it was never published in the traditional sense.

Could This Set a Precedent?

Regardless of the outcome of this lawsuit, it draws attention to a very real issue with ChatGPT. Various observers have noted that the software cannot simply be “misreading” or “misinterpreting” facts that are already posted on the internet. The general understanding is that ChatGPT simply scans the web for relevant content and re-words it. The issue there is that if someone else writes something inaccurate, ChatGPT will parrot it without hesitation.

But the existence of completely fabricated ideas and concepts shows that ChatGPT is actually going much further than simply rewording existing content. Instead, it is generating false case numbers and the text of entirely fabricated lawsuits. So, what exactly is going on here? If ChatGPT is not simply searching the web for content to copy, then what is it doing?
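In broad terms, a large language model does not retrieve documents at all; it predicts the next word based on statistical patterns learned from its training data. The toy Python sketch below is a drastic simplification – a hypothetical word-pair model, nothing like ChatGPT’s actual architecture – but it illustrates how pattern-based generation can stitch real fragments into fluent sentences that no source ever contained.

    import random
    from collections import defaultdict

    # Toy next-word model: learn which word tends to follow which in a
    # tiny "training set," then generate text by sampling those patterns.
    training_text = (
        "the foundation sued the host for fraud . "
        "the host served as treasurer of the foundation . "
        "the court dismissed the lawsuit against the foundation ."
    )

    # Count the observed followers of each word.
    transitions = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

    def generate(start: str, length: int = 12) -> str:
        """Generate text by repeatedly sampling a likely next word."""
        output = [start]
        for _ in range(length):
            candidates = transitions.get(output[-1])
            if not candidates:
                break
            output.append(random.choice(candidates))
        return " ".join(output)

    # Every word pair in the output appeared in the training data, yet
    # the sentence as a whole may assert a "fact" that appears nowhere.
    print(generate("the"))

Run a few times, this sketch can produce a line like “the court dismissed the lawsuit against the host” – every word pair comes from the training text, but the assembled claim appears nowhere in it. That, on a microscopic scale, is how a fluent system can output fabrications that look like plausible facts.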

What happens when ChatGPT generates a fake article that really does damage someone’s reputation or livelihood? What if the fake elements of the story are difficult to spot? What if everyone believes the story, and the consequences occur before anyone can react? Would OpenAI be liable for these damages, or would the journalist who generated the article shoulder the liability? Perhaps both? In the end, these kinds of questions are best answered by lawyers who have experience with software law.

How Can This Unfortunate Scenario Be Avoided?

We must remember that large language model (LLM) tools like ChatGPT review massive amounts of data, recognize patterns, and make predictions based on those patterns, much like a human thought process. So when an LLM tool fabricates information, the output is not invented from nothing; it is false data that looks like what the model expected to find. Like a good lie, it is based in part on truth, which makes it very difficult to spot and weed out.

Emerging AI regulations are trying to deal with this phenomenon. The EU AI Act is often viewed as the most robust and well-conceived AI regulation presently under consideration. The EU AI Act looks at the AI function and determines the level of regulation required under a risk-based model. If a use is viewed as high risk (one that could affect physical safety or personal well-being, for example), more rigorous process and analysis are required, such as bias analysis and an impact assessment. If a use is low risk, very little is required under the regulation beyond notice about the use of AI and transparency regarding the operation of the AI model. The EU AI Act promotes the five principles embodied in the acronym HASTE:

H – Human-centered: results should be reviewed and approved by humans.

A – Accountable: the party using the AI must remain accountable for the AI’s results.

S – Safe and Secure: AI should not be relied upon as the last step in ensuring the personal safety and health of humans.

T – Transparent and Explainable: the AI models used should be transparent and explainable with regard to how the suggested results were achieved.

E – Ethical: the results produced by the AI process should be ethical and fair (the fabricated legal cases referenced above would fail this prong of the test).

AI is growing rapidly, and as with all transformative developments, it takes the law time to react. An enormous amount of work is being done on AI regulation, and developments in related areas – privacy regulation, copyright law, and so on – all factor into current regulatory thinking. It is important to remain aware of these evolving legal principles to avoid unexpected consequences in the future.

Where Can I Find a Qualified Tech Lawyer?

If you have been searching for a technology lawyer, look no further than John O’Brien. Over the years, John P. O’Brien has assisted numerous individuals and companies with a wide range of technology and software-related legal issues. When you’re dealing with complex concepts like AI, software copyrights, and SaaS, it helps to work with a lawyer who stays up-to-date with the latest developments. Book your consultation today to get started with an effective action plan.

About The Author

John P. O'Brien
John O’Brien is an Attorney at Law with 30+ years of legal technology experience. John helps companies of all sizes develop, negotiate, and modify consulting contracts, licenses, SOWs, HR agreements, and other business-related financial transactions. John specializes in software subscription models, financially based cloud offerings, and capacity-on-demand offerings, all built around a client’s IT consumption patterns and budgetary constraints. He has helped software developers transition their business from the on-premises end-user license model to a hosted SaaS environment, helped software developers productize their applications, and represented clients in many inbound SaaS negotiations. John has also developed, implemented, and supported vendor lease/finance programs at several vendors. Please contact John for a free consultation if you, or the organization you work for, are tired of trying to develop, negotiate, and/or modify contracts and tech agreements of any type.

No obligation, Always Free Consultation

I am a legal professional specializing in helping companies of all sizes develop, negotiate, and/or modify consulting contracts, licenses (in-bound or out-bound), SOWs, HR agreements, and other business-related financial transactions. This experience provides a powerful resource in navigating the challenges tech companies and tech consumers face in growing their businesses, managing their risks, and maximizing their profits.

Address:

76 Ridge Road
Rumson, NJ 07760

Phone:

+1 (732) 219-6641
+1 (732) 219-6647 FAX

Hours:

Mon-Fri 8am – 5pm