Real news, real insights – for small businesses who want to understand what’s happening and why it matters.

By Vicky Sidler | Published 28 November 2025 at 12:00 GMT+2
There are moments in business when we google things we probably should not. Legal questions are one of them. If you have ever typed something like “Do I legally need a contract?” or “Can I fire someone for misconduct?” into ChatGPT, you are in good company.
Many business owners do it because it feels faster, cheaper, and much less painful than hiring a lawyer who charges the price of a small used car to say “it depends.”
AI feels like the smart shortcut. It gives confident answers in seconds. It sounds like a professional. It even formats opinions like real legal documents.
But recent legal cases around the world show a problem. AI is confidently wrong. Not just a little wrong. Not a small typo wrong. We are talking imaginary laws, invented court cases, and fake legal citations that look completely real.
If trained attorneys are being fooled, the rest of us never stood a chance.
Key takeaways:
The law changes every week, but AI cannot access paid legal updates
AI tools only know what exists publicly on the internet
If no one has posted the latest change, AI will not know it
When AI does not know something, it often makes up believable-sounding legal information
👉 Need help getting your message right? Download the 5 Minute Marketing Fix.
Why You Can’t Trust ChatGPT, Perplexity or Other AI For Legal Advice
In this article:
The Law Changes Constantly. AI Does Not.
Outdated Information Is Not the Biggest Problem
Courts Have Already Started Issuing Penalties
The Real Risk For Small Business Owners
Clarity Makes Everything Easier
The Law Changes Constantly. AI Does Not.

Here is the first and biggest problem. Laws change all the time. Governments release new regulations, amendments, and decisions every week. Some updates are daily. Most are hidden behind expensive legal databases. Lawyers pay to access them. AI tools do not.
Even tools like Perplexity or ChatGPT with web access have the same issue: they pull from publicly available pages. They can’t access paid legal updates or private databases. And they often rely on scraped summaries, not full documents.
Even when they do access something useful, they’re not reading the whole thing. AI models can only hold a limited amount of text in their context window at once. That means they’re working with small slices of a document, not the full picture. And they still can’t tell the difference between a peer-reviewed journal and a Reddit rant.
Unless someone posts an accurate legal update online and that update makes it into an indexable source, the AI won’t know it happened. If the post uses unclear language or clickbait framing, the AI may misinterpret it completely.
So at best, these tools give you outdated or partial legal information.
At worst, they just make it up.
Outdated Information Is Not the Biggest Problem

When AI does not know the answer, it does not admit uncertainty. Instead, it predicts what a correct answer should look like and generates something that sounds believable.
This is called a hallucination. The polite definition is “a confident answer that is factually untrue.” The less polite definition is “a professional-sounding lie, just without the intent.”
In legal contexts, hallucinations look like:
Fake court cases
Fake judges
Fake legal rules
Fake citations formatted perfectly
Studies show hallucination rates on legal questions are far higher than in other topic areas. Some AI models answer legal questions incorrectly more than half the time.
The worst part is the confidence. The tone alone makes the information feel trustworthy.
To make things even more ironic, many of those paid legal research platforms now include their own AI tools. These systems sit behind the paywall and have full access to updated regulations, court rulings, amendments, and case law. You would think that would solve the problem.
It helps, but not fully.
According to my legal-librarian friend, even these specialised AIs sometimes invent things. And when they do not invent information, they sometimes draw the wrong conclusions from very real law. Several attorneys she works with say the same thing. The AI has the right data, but still leaps to the wrong legal meaning.
So if AI with full access to updated legal databases still gets confused, expecting a publicly trained chatbot to interpret the law correctly is like trusting a first year law student to write a Supreme Court judgment because they once watched Suits.
Courts Have Already Started Issuing Penalties

This is not a hypothetical risk. Courts in South Africa, the United States, the United Kingdom, and Australia have already sanctioned lawyers for submitting AI-generated legal citations that never existed. Some were fined. Some were ordered to redo their legal training. One even had to take a course on legal ethics, which is probably the legal equivalent of being sent to sit in the naughty corner.
You would think those headlines would have stopped the behaviour.
They have not.
My legal-librarian friend still finds hallucinated cases in opposing counsel’s filings today. Lawyers continue submitting AI-generated research because it feels faster than doing the actual work or asking their librarians to do it properly. So despite the warnings, the courtroom embarrassment, and the public legal commentary, many professionals are still rolling the dice and hoping the judge does not notice that the case law they cited exists only in the imagination of a chatbot.
If even the legal industry cannot break the habit, relying on AI for legal accuracy as a small business owner is not confidence. It is wishful thinking.
And here is what makes it worse.
My friend said something once that stuck with me. People do not go to lawyers casually. They go when something is serious. When a contract is falling apart. When someone is being mistreated. When a business is at risk. They hand over a problem that feels overwhelming and trust someone trained, licensed, and experienced to protect them.
For a lawyer to take that trust and quietly hand the work to an AI system that guesses, invents, or draws the wrong conclusions is hard to justify. Especially in a profession that employs trained legal librarians specifically to ensure the research is accurate.
When someone relies on you to protect their livelihood, using AI instead of doing the work is not innovation. It is negligence disguised as efficiency. And honestly, it is hard to forgive.
The Real Risk For Small Business Owners

Legal mistakes are expensive. Very expensive. And the worst part is you usually only discover them when someone else points them out. By then, repairing the damage costs far more than doing it properly in the first place.
And yes, some lawyers are cutting corners and relying on AI. That is its own problem. If you are paying real money, you deserve real expertise. Lawyers charge too much for you to settle for someone outsourcing accuracy to a chatbot.
So if the issue matters, hire someone reputable. Someone who does their own research. Someone who knows the current law because they actively stay updated, not because a robot sounded confident.
Which brings us to the simplest question of all.
Is saving the cost of one consultation worth gambling your business on the free legal advice an AI guessed?
Now, before we throw AI into a volcano and decide it has no value, it is fair to say it does have a place. AI can be useful if you treat it like an assistant with enthusiasm but no accountability.
Here is where AI can genuinely help you:
Understanding terms in plain English
Researching general context
Summarising long explanations
Preparing questions before speaking to a lawyer
But here is where the line is drawn firmly in permanent marker. AI should never be the source of:
What you are legally allowed to do
Whether a contract is enforceable
Whether a policy is compliant
Whether you can hire, fire, or discipline someone under law
Think of AI as a friendly explainer. Not an advisor.
It can help you understand what to ask. It should never be the one giving you the final answer.
Clarity Makes Everything Easier

Legal misunderstandings often happen because communication is unclear. The same is true in marketing. Confusion creates risk. Clarity builds trust.
If you want support getting your message simple and sharp so clients understand you instantly, start here.
Download my free 5 Minute Marketing Fix and get one clear sentence that describes what your business does and why it matters.
Because in business, clarity protects you far better than confident guessing ever will.
Related reading

1. AI Medical Advice Is Usually Wrong and Sometimes Dangerous. If AI can’t get the law right, wait until you see how it handles your symptoms. Medical hallucinations are even riskier.
2. Why AI Business Advice Fails Harder Than You Think. Legal decisions aren’t the only place AI can lead you astray. Here’s why blindly following business advice can be just as costly.
3. AI Therapy Is Dangerous: Research Shows Why. Some people turn to AI for comfort. Others for crisis help. This article shows why chatbots should never replace real support.
4. Fake AI Citations Trigger UK Court Warning—Here’s What Small Businesses Should Learn. Since legal hallucinations are central to this topic, this post gives another example of real consequences and what businesses should take away from it.
5. AI Hallucinations in Court—Big Trouble for Legal Trust. This post zooms in on the fallout from hallucinated filings in real courts and helps build a fuller picture of why trust matters in high-risk decisions.
Frequently Asked Questions About Using AI For Legal Advice

1. Can I use ChatGPT or Perplexity to answer legal questions?
You can use AI to understand terminology or get basic context, but it should not be treated as legal advice. AI tools do not have reliable access to current legislation or paid legal databases, and they often produce incorrect or invented answers.

2. How accurate is AI when it comes to law and compliance?
Accuracy is inconsistent. Even legal tools built into paid research platforms have been known to invent cases or misinterpret valid information. Public AI models are even less reliable because they cannot access the latest legal updates behind paywalls.

3. Why do AI tools create fake cases or legal citations?
When AI does not know the answer, it predicts what a correct answer should look like and generates something plausible. This is called a hallucination. The output often sounds confident and professional, which makes it easy to believe.

4. Have courts taken action against AI-generated legal errors?
Yes. Courts in several countries have fined or sanctioned lawyers for submitting filings with fake citations generated by AI. Despite this, some legal professionals are still submitting hallucinated case law in their filings.

5. Is it still useful to use AI when preparing for a legal consultation?
Yes. AI can help summarise content, explain terms in plain English, or help you prepare relevant questions before speaking with a lawyer. It should support the conversation, not replace an expert.

6. How do I know if the legal information AI gives me is correct?
You cannot assume it is correct. If the topic has legal consequences, you should verify it with a qualified lawyer or legal professional who is actively staying updated with the current law.

7. Is AI getting better at legal interpretation?
There are improvements, especially with paid platforms adding AI features, but accuracy is far from consistent. Even with full access to updated case law and legislation, AI can still misinterpret meaning or context.

8. What is the safest way to use AI for legal research?
Treat it as a starting point, not a decision-maker. Use it to gather clarity, simplify concepts, or outline what you need to ask. Any legal decision should still be reviewed and confirmed by a professional.

9. What should I do if a lawyer relies on AI instead of proper research?
If you are paying for legal advice, you are paying for expertise, accuracy, and judgment. If you suspect someone is outsourcing that responsibility to a chatbot, ask direct questions about how they verified the information or consider getting a second opinion.

10. Is it worth paying for a real lawyer if AI feels faster and cheaper?
If the decision has legal, financial, contractual, or employment consequences, the cost of a real consultation is almost always cheaper than fixing a mistake made from incorrect AI-generated advice.

Created with clarity (and coffee)