The Legal Dangers of Overselling the Capabilities of AI
Enthusiasm for artificial intelligence has tapered, replaced by growing skepticism about the technology's overall profitability and the unexpected challenges inherent in it. Predictably, observers are already comparing this moment to the dot-com crash. It is too early to tell whether these predictions will come true, but the similarities are plain to anyone looking strictly at the numbers. Regardless of what happens with AI, entrepreneurs should be wary of the dangers of overselling and overpromising in this new era of healthy skepticism. Recent developments in this field show that, under certain circumstances, this mistake can easily lead to lawsuits. It may be something worth discussing with a technology lawyer.
Regulatory Scrutiny Forces Big Tech Names to Revise AI Messaging
In August 2025, some of the biggest names in technology were forced to tone down their AI messaging in the face of growing scrutiny. The body in question was the National Advertising Division (NAD), a self-regulatory organization affiliated with the Better Business Bureau (BBB). The NAD issued a strong warning to companies like Apple, Google, and Microsoft about overpromising with their AI products. In particular, it urged companies to rethink vague terms like “automation” and replace them with more specific descriptions of what their AI can actually provide.
The Federal Trade Commission (FTC) has issued similar warnings recently. Rather than targeting vague marketing terms, however, the FTC specifically addressed the dangers of outright falsehoods, noting that they could lead to lawsuits and fines, including enforcement actions brought by government agencies against AI companies. Some firms predict that new regulations could severely affect how AI companies market their products in the future. Specifically, AI companies may be required to substantiate their marketing claims with evidence, thereby ending the “hype” of unproven tech.
AI also raises many new legal questions, such as the extent to which “fair use” covers a copyright holder’s work when it is used to train AI models, and whether privacy rights survive even when personal data can be freely gathered on the web. Familiar issues like copyright and personal privacy may therefore be viewed differently in an AI context. Then there are hallucinations, in which an AI, eager to please, projects a likely result based on historical data but presents it as fact, citing, for example, legal case law that never existed.
One is reminded of the marketing restrictions that apply to financial markets. Overpromising is strictly prohibited for investment products. If a financial firm wishes to advertise its returns, it must do so in a very specific way, with verifiable, “proven” data and always with the caveat that past returns do not guarantee future gains. The AI industry could see similar restrictions and requirements for marketing campaigns in the future. One important distinction in AI marketing language could be the difference between “fully autonomous” and “AI-assisted.”
The hype around AI has allowed many companies to capitalize, but that hype could become incredibly dangerous if it turns out that the alleged transformation is fabricated, or is presented without notice of AI’s limitations and risks. It is all too easy to use buzzwords and other marketing strategies to claim that a company is on the cutting edge of AI development. But if there is nothing going on beneath the surface, the bubble could burst.
These concerns are heightened by the costs involved in AI testing and training. As one report notes, a Swiss investment firm now spends over $6,000 to run a single model backtest using ChatGPT, and fully testing a model requires hundreds of such runs. Without testing, the models are not reliable. These costs not only incentivize cutting corners but also raise serious questions about the overall profitability of AI.
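A rough back-of-envelope calculation shows how quickly those figures compound. The per-test cost comes from the report cited above; the number of tests per model is an illustrative assumption based on the report’s “hundreds” of runs:

```python
# Back-of-envelope estimate of model validation costs.
# The $6,000 per-backtest figure comes from the cited report;
# tests_per_model is an illustrative assumption, not a reported number.
cost_per_backtest = 6_000   # USD per backtest, per the cited report
tests_per_model = 300       # assumed: "hundreds" of tests per model

cost_per_model = cost_per_backtest * tests_per_model
print(f"Estimated cost to validate one model: ${cost_per_model:,}")
# Estimated cost to validate one model: $1,800,000
```

At these assumed volumes, properly validating a single model approaches two million dollars, which illustrates why skipping tests is tempting and why doing so is legally risky if the resulting product is marketed as reliable.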
Is AI Really Leading to Higher Profits?
As Futurism notes, a new MIT study found that 95% of businesses that attempt to integrate AI into their operations fail to do so. People are discovering that AI is not a magic, instant money-maker. The study also found that AI products can complete only about 30% of the tasks assigned to them, and lesser-known AI products perform significantly worse. Still, AI is improving at a fast pace; the technology, the law, and public awareness are all changing rapidly. Entrepreneurs must balance the risk of being late to market against the risk of over-hyping early in the cycle. AI is definitely real and improving quickly, but the challenges are not going away.
Remember, most observers expected AI to add trillions of dollars to the global economy within a few short years. People are starting to wonder what will happen if AI fails to live up to that promise, especially since so much has already been invested in the technology. Comparing the pace of investment in AI with the actual results so far, it is easy to see why so many people liken this moment to the dot-com bubble.
How Much Does it Cost to Generate a Single AI Answer?
When someone asks ChatGPT a question, the company pays for roughly three watt-hours of energy. It must also divert a significant amount of water to its servers for cooling. Simple questions consume less power and water; more complex queries consume more. There is also the infrastructure cost of building the data centers themselves, which are now spread across the country.
While three watt-hours is not much, it adds up across an entire planet of users asking ChatGPT questions at the same time. Representatives of OpenAI, the company behind ChatGPT, have admitted that responding to users who say “please” and “thank you” costs the company tens of millions of dollars.
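The scale effect is easy to sketch. The three watt-hours per query comes from the figure cited above; the daily query volume is an illustrative assumption, not a reported number:

```python
# Aggregate energy estimate for ChatGPT-style queries at scale.
# The 3 Wh/query figure comes from the article; queries_per_day
# is an illustrative assumption, not a reported number.
wh_per_query = 3                  # watt-hours per query, per the cited figure
queries_per_day = 1_000_000_000   # assumed: one billion queries per day

daily_wh = wh_per_query * queries_per_day
daily_mwh = daily_wh / 1_000_000  # convert watt-hours to megawatt-hours
print(f"Estimated daily energy: {daily_mwh:,.0f} MWh")
# Estimated daily energy: 3,000 MWh
```

Under these assumptions, a negligible per-query cost becomes thousands of megawatt-hours per day, which is why even polite pleasantries add up to real money.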
The real question is how much ChatGPT earns. According to various reports, most people use the free version of ChatGPT. Some pay for subscriptions, but that does little to stem the company’s losses: it reportedly loses roughly three times what it makes, and it is not clear how the company will become profitable.
Can a Tech Lawyer Help My AI Startup?
A technology lawyer in the United States may be able to help you navigate the many challenges associated with AI startups, including investor allegations of overpromising. Founders in the early stages of launching a startup may want to adopt a healthy degree of caution when promising particular results or outcomes. Investors are now re-examining AI with a much more conservative mindset, and it is important not to oversell the technology’s capabilities. Doing so will not only help avoid potential legal issues in the future but may also strengthen the viability and credibility of the industry as a whole. Consider continuing this conversation with John P. O’Brien, Technology Lawyer.
