Real news, real insights – for small businesses that want to understand what’s happening and why it matters.
By Vicky Sidler | Published 16 July 2025 at 12:00 GMT
It finally happened. A judge has had enough of fake AI-generated nonsense showing up in legal documents.
You know we’ve entered strange territory when judges have to remind lawyers not to quote imaginary court cases. And yet, here we are.
The High Court of England and Wales just warned lawyers: if you quote ChatGPT like it’s a law library, expect consequences ranging from fines to a very awkward police chat.
Before you breathe a sigh of relief, thinking this doesn’t apply to you: if you’re using AI to write blogs, reports, or anything else for public consumption, it very much does.
UK courts say lawyers using AI must fact-check everything with credible sources.
Two recent cases included dozens of fake legal citations—none real, none helpful.
Judge says AI “can produce confident assertions that are simply untrue.”
Sanctions can include public shaming, fines, contempt charges, or worse.
For business owners, the lesson is simple: AI is a tool, not a truth machine.
Need help getting your message right? Grab my free 5-Minute Marketing Fix. It’s fast, easy, and helps you build trust.
Judge Victoria Sharp didn’t mince words. Generative AI tools like ChatGPT, she said, “are not capable of conducting reliable legal research.”
Generative AI isn’t evil. It’s not trying to trick you. But it is trained to sound right—not to be right.
It can sound smart. It can sound certain. But as any lawyer who just submitted 18 non-existent case references can tell you—it can also be confidently wrong.
It’s like asking your cousin Dave for tax advice after his third beer. He’ll say it with flair, but you’re still getting audited.
That’s because tools like ChatGPT don’t “know” things. They predict words based on patterns in their training data. So when you ask for a legal precedent—or, say, a blog post, product description, or email campaign—they do their best to sound like what they’ve seen before.
Unfortunately, that sometimes means confidently making things up.
In court, the result is 18 fake citations. In your business? It might be:
Quoting a made-up stat in a blog.
Repeating a myth about SEO.
Offering advice that’s wildly out of date.
Using jargon that sounds smart but confuses real people.
If this sounds like a “lawyer problem,” think again.
Every business owner using AI for marketing, proposals, or strategy docs is walking a similar tightrope.
The temptation is real: pop in a prompt, out comes a polished paragraph. Publish and move on. But here’s what you’re risking:
Misinformation in your content (that someone will eventually call out)
Erosion of trust with your audience
Embarrassment when a customer replies, “That’s not actually true.”
It’s not always about legal consequences. Sometimes it’s about credibility—and that hits revenue faster than any fine.
In one case reviewed by the UK court, a lawyer submitted 45 case citations. Eighteen were fiction. The others? Not remotely relevant.
Another lawyer dropped five case citations that didn’t exist. Her defense? Maybe the citations came from AI summaries “in Google or Safari.”
Translation: “I don’t think I used AI, but who can say for sure?” A bold move.
To be clear, the court decided not to press contempt charges—but stressed that this should not be seen as precedent. Next time? Maybe not so lucky.
Whether you’re a lawyer or a landscaper, the rule is the same: check your sources.
If you’re using AI to draft marketing content, fact-checking is your new part-time job. That might mean:
Googling statistics before you publish them (which means, I guess, that Google isn’t going to die after all).
Checking product or service claims against your actual offer.
Reading aloud what AI wrote to make sure it sounds like you.
Making sure every “expert tip” is actually… helpful.
Even better? Don’t let AI take the first stab. Give it a solid foundation with your own thinking and strategy first.
There’s another lesson here: sounding smart doesn’t mean being helpful.
Judge Sharp wasn’t annoyed because the filings were incoherent. They sounded perfectly legit. They just weren’t true. (And I FEEL her frustration, because it’s not as though we don’t pay lawyers enough for their work. Law is also one of the few professions that still employs librarians to do the research and fact-checking.)
This is where small business marketing often stumbles. AI can make you sound clever. But if your messaging is unclear, customers won’t buy—because they don’t get it.
That’s why my favorite antidote to AI bloat is still the basics:
One clear sentence that explains what you do
One real problem you solve
One real reason people should care
If that sounds simple, it’s because it is. But it works—especially when AI tries to take over your voice.
Let’s be honest: most small businesses won’t get dragged into court for quoting fake AI content. But you can lose credibility. You can mislead your audience. And you can confuse people so much they click away and never come back.
Think of trust like a bank account. Every mistake drains it a little. When AI gets it wrong—and you publish without checking—you’re making a withdrawal you didn’t mean to.
Worse, if people realize your content is unverified or unoriginal, they’ll stop trusting you altogether. And good luck rebuilding that trust from an inbox full of unread newsletters.
As a Duct Tape Marketing Consultant and StoryBrand Certified Guide, here’s what I tell clients: AI can help—but only when it’s trained on your message, not the internet’s leftovers.
Start with clarity. Use tools like the 5-Minute Marketing Fix to get your message straight before you hit “generate.” Then use AI to support, not replace, your thinking.
And please—double-check your citations. Whether you’re writing for court or for customers, accuracy still matters.
👉 Want to get clear and build trust fast? Download the 5-Minute Marketing Fix – it’s free, fast, and won’t make things up.
Created with clarity (and coffee)