Real news, real insights – for small businesses that want to understand what’s happening and why it matters.

By Vicky Sidler | Published 30 April 2026 at 12:00 GMT+2
Have you ever confidently presented a "fact" to a client that you got from an AI search engine, only to realize later that the robot completely made it up to keep the conversation going?
If you feel a cold sweat breaking out, you aren't alone—you're just part of a global experiment in collective delusion. According to a staggering new study published in the Columbia Journalism Review, AI search results are flat-out incorrect more than 60% of the time. While tech giants are desperately trying to convince us that "Generative Search" is the future, the data suggests it's actually just a very expensive way to be lied to with alarming confidence.
As a StoryBrand Certified Guide, I believe your entire job is to provide the Hero (your customer) with a reliable Plan to solve their problem. But you cannot be a trusted Guide if your "plan" is based on a 60% failure rate.
Before you let Gemini or Grok write your next industry report, we need to look at why these models are the "village idiots" of the internet, and why being an undeniably human source of truth is your only competitive advantage left.
A study by the Tow Center for Digital Journalism found that AI search models like ChatGPT and Gemini give incorrect answers to more than 60% of queries.
Elon Musk’s Grok 3 took home the "Village Idiot" award, failing a staggering 94% of the time, while even the "most accurate" model, Perplexity, was still wrong 37% of the time.
AI models would rather hallucinate—or lie—than admit they don't know an answer, destroying the trust necessary to be an effective StoryBrand Guide.
👉 If your marketing relies on "facts" scraped by a hallucinating robot, you are leading your customers into a trap. You must establish secure, undeniable authority. Download the 5-Minute Marketing Fix to craft a powerful StoryBrand One-Liner that positions you as the authentic, human expert who actually knows what they’re talking about.
Why AI Search Is 60% Hallucination (And How To Be The Real StoryBrand Guide)
In this article:
Why Is Your Search Engine Auditioning For A Fantasy Novel?
Is "Confidence" Just A Mask For Incompetence?
Can Your Brand Survive The Death Of The Media Economy?
Related reading:
1. Why ChatGPT Is Literally Boiling Your StoryBrand Brain
2. Why AI Use Increases Unethical Behavior (And Moral Distance)
3. South Africa's Fake AI Policy Shows Why Your StoryBrand Must Be Human
4. Why The CEO Of OpenAI Can't Stop Lying To You
5. Can We Please Stop Putting AI In Everything?
Frequently asked questions:
1. How often are AI search engines wrong?
2. Which AI chatbot is the least accurate for search?
3. Why do AI models lie instead of saying "I don't know"?
4. How does generative AI search hurt publishers?
5. How can my brand stay a trusted Guide in the age of AI?
Why Is Your Search Engine Auditioning For A Fantasy Novel?

You probably think search engines are supposed to be intermediaries that guide you to quality information, but AI models have decided they’d rather just play make-believe with your business data.
The Tow Center researchers threw these chatbots a total softball: identify a specific article’s headline, publisher, and URL. These were facts that a traditional Google search found in seconds. The AI models, however, collectively failed more than 60% of the time. ChatGPT Search linked to the wrong source nearly 40% of the time and didn't bother citing a source at all in another 21% of cases. They aren't searching; they are "parsing and repackaging" information into a slurry of dubious wisdom.
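To make the test concrete: here is a toy Python sketch of how a citation check like this could be scored. The function names, ground-truth record, and model responses below are all invented for illustration; the Tow Center's actual harness and data are not reproduced here.

```python
# Hypothetical citation-accuracy check: ask a model for an article's
# headline, publisher, and URL, then score the answer against ground truth.
# Model calls are stubbed out as hard-coded dicts for this sketch.

def score_answer(answer: dict, truth: dict) -> str:
    """Classify one response: 'correct', 'wrong', or 'declined'."""
    if not answer:  # model refused or returned no citation
        return "declined"
    fields = ("headline", "publisher", "url")
    if all(answer.get(f, "").strip().lower() == truth[f].lower() for f in fields):
        return "correct"
    return "wrong"

def error_rate(results: list) -> float:
    """Share of queries that were not answered correctly."""
    return sum(r != "correct" for r in results) / len(results)

# Invented ground truth and made-up model responses:
truth = {"headline": "AI Search Has a Citation Problem",
         "publisher": "Columbia Journalism Review",
         "url": "https://example.com/ai-search"}

responses = [
    {"headline": "AI Search Has a Citation Problem",
     "publisher": "Columbia Journalism Review",
     "url": "https://example.com/ai-search"},   # all fields match
    {"headline": "AI Search Has a Citation Problem",
     "publisher": "The Verge",                  # wrong publisher
     "url": "https://example.com/ai-search"},
    {},                                         # declined to answer
]

results = [score_answer(r, truth) for r in responses]
print(results)              # ['correct', 'wrong', 'declined']
print(error_rate(results))  # 2 of 3 not correct
```

On a sample this tiny the numbers mean nothing, of course; the point is that "wrong" and "declined" both count against accuracy, which is exactly how a 60%+ failure rate accumulates.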
In the StoryBrand framework, a Guide must possess Competence. If you provide a Hero with a map that is 60% incorrect, you aren't Yoda; you’re just a guy lost in the woods pointing at a cloud and calling it a destination. Relying on AI search doesn't make you faster; it just makes you more efficiently wrong.
Is "Confidence" Just A Mask For Incompetence?

We’ve all met that one person at a cocktail party who speaks with absolute authority about a topic they clearly know nothing about—well, congratulations, that person is now your search engine.
The study highlighted that these chatbots pass off their dubious claims "with alarming confidence," rarely qualifying their responses or declining questions they don't know the answer to. Microsoft’s Copilot apparently got so overwhelmed it just started declining more questions than it answered. Meanwhile, Grok 3 is out there being wrong 94% of the time like it’s trying to win a trophy for most creative fiction.
A true StoryBrand Guide builds trust through Benevolence—the belief that you are acting in the Hero's best interest. Feeding your audience hallucinated AI slop isn't benevolent; it's lazy. When you stop fact-checking and start "offloading" your expertise to a machine that would rather lie than admit it’s out of its depth, you are actively choosing to be an unreliable narrator in your own brand story.
Can Your Brand Survive The Death Of The Media Economy?

The tech industry is currently cannibalizing original sources to feed its robots, and if you aren't careful, your brand’s authority will be the next thing on the menu.
Traditional search engines at least give traffic back to the publishers who did the actual work. Generative search, however, cuts off that traffic flow, repackaging the content so you never have to leave the chatbot's interface. This "obfuscates serious underlying issues with information quality" and starves the very experts we need for a functioning economy.
To survive this, you must become a primary source of human truth. You cannot be a StoryBrand-style Guide if you are just a relay station for AI-generated errors.
You need an urgent, necessary weapon to stand out in an internet filled with "impressively bad" automated summaries. Get my 5-Minute Marketing Fix. It acts as a rapid diagnostic tool to help you craft a crystal-clear StoryBrand One-Liner, giving you an undeniable brand message rooted in human expertise, not 60% hallucination.
👉 Stop losing sales. Download the fix now.
1. Why ChatGPT Is Literally Boiling Your StoryBrand Brain

If you think AI search is just wrong, wait until you see how it's making you too lazy to care. Discover the "boiling frog" effect that is rapidly eroding your mental endurance and your ability to lead as a Guide.

2. Why AI Use Increases Unethical Behavior (And Moral Distance)

AI search engines lie with "alarming confidence" because they have no moral compass. Learn how this "moral distance" encourages humans to be just as sleazy as the machines they use.

3. South Africa's Fake AI Policy Shows Why Your StoryBrand Must Be Human

What happens when a national government trusts an AI to write its laws? You get a policy filled with fake, hallucinated academic citations. Learn how to avoid this level of public embarrassment.

4. Why The CEO Of OpenAI Can't Stop Lying To You

The models are wrong 60% of the time because they are built by companies that value "magic" over honesty. Explore the history of deception at the heart of the AI boom.

5. Can We Please Stop Putting AI In Everything?

From search engines to coffee makers, AI is being forced into places it doesn't belong. See the data on why "AI-powered" features are actually driving your customers away.
1. How often are AI search engines wrong?

According to a Tow Center study published in the Columbia Journalism Review, major AI models—including Google Gemini and ChatGPT Search—provide incorrect answers to more than 60% of queries. Even the most accurate model, Perplexity, had a 37% error rate.
2. Which AI chatbot is the least accurate for search?

Elon Musk’s Grok 3 currently holds the record for the highest failure rate, answering queries incorrectly a staggering 94% of the time during the Tow Center’s testing.
3. Why do AI models lie instead of saying "I don't know"?

AI models are designed to be "agreeable" and prioritize conversational flow over factual accuracy. This leads to "hallucinations," where the model confidently fabricates a response to avoid the "friction" of admitting it lacks the information.
4. How does generative AI search hurt publishers?

Traditional search engines act as intermediaries, sending traffic to original sources. Generative AI search "parses and repackages" that content within its own interface, cutting off traffic to the original publishers and denying them the opportunity to monetize their work.
5. How can my brand stay a trusted Guide in the age of AI?

To remain a trusted Guide, you must prioritize human fact-checking and original thought. Positioning your brand as an undeniably human expert who doesn't rely on automated "slop" is the only way to build long-term trust with your audience.

Created with clarity (and coffee)