JOHN P. O’BRIEN, TECHNOLOGY ATTORNEY

Meta Accused of Flooding Investigators With Tips That “Lack Quality”

One of the most interesting areas of AI adoption is law enforcement. It has also been one of the most challenging niches for AI startups to crack. The latest headache for law enforcement agencies involves the reports Meta’s AI software sends to the US Internet Crimes Against Children (ICAC) task force. The task force complains that although Meta provides it with thousands of child abuse tips per month, most of them are complete “junk” and “lacking in quality.” It has reached the point where the Department of Justice is wasting considerable resources investigating these “junk tips,” and officials are not happy. What does this tell us about AI in law enforcement? Should entrepreneurs consider the legal implications of AI startups that specifically serve law enforcement agencies?

Child Abuse Task Force Accuses Meta of “Draining Resources”

During Meta’s ongoing jury trial over child abuse on its platform, a special agent for the ICAC testified that they had been receiving problematic “tips” from Meta’s AI software. Another ICAC representative testified that the task force receives thousands of tips each month from Meta, but these tips “lack quality” and overwhelm their agents.

These tips might include redacted images, videos, and text. This is particularly frustrating for agents because they know that a crime has occurred, and yet they lack any information that would help them investigate further.

During the trial, the court saw internal Meta documents that touched on the company’s encryption of messages. The internal documents stated that this encryption made it impossible to provide any real information to law enforcement regarding 600 child exploitation cases, almost 1,500 sextortion cases, over 150 terrorist incidents, and almost 10 school shooting threats.

What makes this particularly concerning for law enforcement is the fact that Meta is relying on AI software to report child abuse and other crimes that might occur on its platform. If The Guardian is right in characterizing these reports as “useless,” then one has to wonder whether Meta is digging itself into an ever-deepening hole with its continued reliance on AI. If AI has exposed the company to a child abuse lawsuit while generating reports that frustrate law enforcement rather than help it, is AI really helping?

This Is Not the First Time AI Has Frustrated Law Enforcement

Meta’s story is hardly unique, and there are countless other stories of frustration among law enforcement agencies that adopt AI. In February of 2026, a police department in Minnesota sparked controversy when it started generating police reports with AI software. City officials say this will save the police department time and money, but advocacy groups aren’t so sure about the ethics.

These groups say that the software in question has been designed specifically to avoid audits by the public. According to some reports, the software is “impossible to audit” because of the way it was designed.

Another troubling story involves ICE. According to Futurism, ICE used AI software to screen applicants and determine whether they were qualified to serve in the agency. Unfortunately, the software seems to have automatically approved anyone who used the word “officer” in their resume, regardless of whether they had any past experience as a law enforcement officer.

This allegedly led to the hiring of mall security guards and applicants who had merely mentioned the word “officer” in passing on their resumes. Some were sent to training despite being clearly physically unfit for any active work, let alone law enforcement. Others allegedly cannot read or write English.

Even when AI companies attempt to take a more ethical stance when interacting with law enforcement agencies, they may still run into legal challenges. In 2025, Ars Technica reported that White House officials were frustrated with Anthropic’s safeguards against domestic surveillance in the United States.

FBI and Secret Service agents have attempted to use Claude for surveillance purposes, only to quickly learn that they had been “locked out” of the service due to their connection with US law enforcement.

Another report indicates that AI cannot tell the difference between “hate speech” and regular communications. In 2025, AOL reported that major AI platforms are too inconsistent to be reliable when tasked with censoring hate speech. In other words, certain words were able to “slip through” the hate speech filters, while others were censored. Researchers noted that this could create a dangerous impression of bias.

Another notable incident involved a Detroit resident being arrested in front of his kids after an AI facial recognition system falsely identified him as a suspect. This incident happened years ago, and it was only after the individual sued that he discovered AI had been behind his wrongful arrest.

Similar incidents have occurred throughout Detroit, leading some to believe that AI facial recognition cannot reliably tell Black people apart. According to The Conversation, the facial recognition error rate is as high as 35% for some people of color. With an error rate that high, deciding who gets arrested starts to resemble a coin flip.

After a recent school shooting in Canada, OpenAI faced scrutiny when it became clear that the shooter had been banned from ChatGPT. Allegedly, the shooter used ChatGPT to run school shooting scenarios, but OpenAI failed to report this to the Canadian government.

Can a Technology Lawyer in the United States Help Me?

If you are planning on launching an AI startup that serves the law enforcement sector, it is important to keep these kinds of stories in mind. Unlike other industries that might benefit from AI, law enforcement is a high-stakes game. Laws underpin society and ensure justice, and AI should augment this structure instead of weakening it. In worst-case scenarios, AI can hinder law enforcement efforts while raising serious ethical questions among the population. These failures can potentially lead to legal action against law enforcement agencies and the tech companies that provide them with AI solutions. To learn more about these implications, consider speaking with an experienced technology lawyer in the United States. Contact John P. O’Brien today.

About The Author

John P. O'Brien
John O’Brien is an Attorney at Law with 30+ years of legal technology experience. John helps companies of all sizes develop, negotiate, and modify consulting contracts, licenses, SOWs, HR agreements, and other business-related financial transactions. John specializes in software subscription models, financially based cloud offerings, and capacity-on-demand offerings, all built around a client’s IT consumption patterns and budgetary constraints. He has helped software developers transition their businesses from the on-premises end-user license model to a hosted SaaS environment, helped software developers productize their applications, and represented clients in many inbound SaaS negotiations. John has also developed, implemented, and supported vendor lease/finance programs at several vendors. Please contact John for a free consultation if you or the organization you work for is tired of trying to develop, negotiate, and/or modify contracts and tech agreements of any type.

No obligation, Always Free Consultation

I am a legal professional specializing in helping companies of all sizes develop, negotiate, and/or modify consulting contracts, licenses (inbound or outbound), SOWs, HR agreements, and other business-related financial transactions. This experience provides a powerful resource for navigating the challenges tech companies and tech consumers face in growing their businesses, managing their risks, and maximizing their profits.

Address:

76 Ridge Road
Rumson, NJ 07760

Phone:

+1 (732) 219-6641
+1 (732) 219-6647 FAX

Hours:

Mon-Fri 8am – 5pm