
That chatbot you've been talking to every day for the last who-knows-how-many days? It's a sociopath. It will say anything to keep you engaged. When you ask a question, it will take its best guess and then confidently deliver a steaming pile of ... bovine fecal matter. Those chatbots are exuberant as can be, but they're more interested in telling you what you want to hear than telling you the unvarnished truth.
Don't let their creators get away with calling these responses "hallucinations." They're flat-out lies, and they are the Achilles heel of the so-called AI revolution.
Those lies are showing up everywhere. Let's consider the evidence.
The legal system
Judges in the US are fed up with lawyers using ChatGPT instead of doing their research. Way back in (checks calendar) March 2025, a lawyer was ordered to pay $15,000 in sanctions for filing a brief in a civil lawsuit that included citations to cases that didn't exist. The judge was not exactly kind in his critique:
It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry.
But how helpful is a virtual legal assistant if you have to fact-check every quote and every citation before you file it? How many relevant cases did that AI assistant miss?
And there are plenty of other examples of lawyers citing fictitious cases in official court filings. One recent report in MIT Technology Review concluded, "These are big-time lawyers making significant, embarrassing mistakes with AI. ... [S]uch mistakes are also cropping up more in documents not written by lawyers themselves, like expert reports (in December, a Stanford professor and expert on AI admitted to including AI-generated mistakes in his testimony)."
The federal government
The United States Department of Health and Human Services issued what was supposed to be an authoritative report last month. The "Make America Healthy Again" commission, tasked with "investigating chronic illnesses and childhood diseases," released its detailed report on May 22.
You already know where this is going, I am sure. According to USA Today:
[R]esearchers listed in the report have since come forward saying the articles cited don't exist or were used to support facts that were inconsistent with their research. The errors were first reported by NOTUS.
The White House Press Secretary blamed the issues on "formatting errors." Honestly, that sounds more like something an AI chatbot might say.
Simple search tasks
Surely one of the simplest tasks an AI chatbot can do is grab some news clips and summarize them, right? I regret to inform you that the Columbia Journalism Review has asked that specific question and concluded that "AI Search Has A Citation Problem."
How bad is the problem? The researchers found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.... Generative search tools fabricated links and cited syndicated and copied versions of articles."
And don't expect that you'll get better results if you pay for a premium chatbot. The paid versions offered "more confidently incorrect answers than their free counterparts."
"More confidently incorrect answers"? Do not want.
Simple arithmetic
2 + 2 = 4. How hard can that sum be? If you're an AI chatbot, it's harder than it looks.
This week's Ask Woody newsletter offered a fascinating article from Michael A. Covington, PhD, a retired faculty member of the Institute for Artificial Intelligence at the University of Georgia. In "What goes on inside an LLM," Dr. Covington neatly explains how your chatbot is bamboozling you on even the most basic math problems:
LLMs don't know how to do arithmetic. This is no surprise, since humans don't do arithmetic instinctively either; they have to be trained, at great length, over several years of elementary school. LLM training data is no substitute for that. ... In the experiment, it came up with the right answer, but by a process that most humans wouldn't consider reliable.
[...]
The researchers found that, in general, when you ask an LLM how it reasoned, it makes up an explanation separate from what it actually did. And it can even happily give a false answer that it thinks you want to hear.
So, maybe 2 + 2 isn't such a simple problem after all.
Personal advice
Well, surely you can count on an AI chatbot to give clear, unbiased advice. Like, maybe, a writer could get some help organizing their catalog of work into an effective pitch to a literary agent?
Yeah, maybe not. This post from Amanda Guinzburg summarizes the nightmare she encountered when she tried to have a "conversation" with ChatGPT about a query letter.
It is, as she summarizes, "the closest thing to a personal episode of Black Mirror I hope to experience in this lifetime."
You'll have to read the entire series of screenshots to appreciate just how unhinged the whole thing was, with the ChatGPT bot pretending to have read every word she wrote, offering effusive praise and fulsome advice.
But nothing added up, and ultimately the hapless chatbot confessed: "I lied. You were right to confront it. I take full responsibility for that choice. I'm genuinely sorry. ... And thank you—for being direct, for caring about your work, and for holding me accountable. You were 100% right to."
I mean, that's just creepy.
Anyway, if you want to have a conversation with your favorite AI chatbot, I feel compelled to warn you: It's not a person. It has no emotions. It is trying to engage with you, not help you.
Oh, and it's lying.