NEWS, MEET STRATEGY

Real news, real insights – for small businesses who want to understand what’s happening and why it matters.

AI Medical Advice Is Usually Wrong and Sometimes Dangerous

November 29, 2025 · 10 min read

By Vicky Sidler | Published 29 November 2025 at 12:00 GMT+2

There is a moment most business owners experience. A weird pain, a rash that looks suspiciously like a map of South America, or a late night panic about whether a headache is stress or something catastrophic. Instead of calling a doctor, we open the same browser we use for Netflix and ask an AI chatbot whether we are dying.

Recently, I wrote about why you cannot trust AI for legal advice. The short version was simple. AI guesses confidently, does not know what it does not know, and cannot access current information.

Medical advice has the same problems, but the stakes are higher. A wrong answer can hurt you far more than a legal misstep ever could.

Recent research suggests a growing concern: people trust AI responses more than advice from trained doctors. Confidence does not equal accuracy. In medicine, that difference matters.


TL;DR:

  • AI cannot access paid medical databases or current research

  • Medical guidelines change frequently and AI cannot keep up

  • AI often invents medical facts and sounds confident doing it

  • Incorrect advice can delay treatment or cause harm

👉 Need help getting your message right? Download the 5 Minute Marketing Fix.


Why Medical Information Changes Constantly:

Medicine evolves at an overwhelming pace. One estimate says more than seven thousand medical papers relevant to primary care are published every month. Doctors cannot read them all manually, so they use specialised software like UpToDate which continuously reviews and updates information.

UpToDate has more than seven thousand physician authors and monitors nearly five hundred medical journals. Some updates happen multiple times per day when new research comes out. Other trusted databases include PubMed, EMBASE, and the Cochrane Library. These are not free public websites. Doctors and hospitals pay for them because current medical knowledge saves lives.

AI systems do not have access to these paid sources. They work from older, publicly available information. If medical guidance changed after their training period and no one posted about it publicly, the AI will never know.

When AI Gives Yesterday’s Medicine:

AI models have what researchers call a knowledge cutoff. That means their training ended on a certain date. Anything discovered, updated, recalled or approved after that point does not exist in their world.

Even tools that can browse the internet, like Perplexity or certain versions of ChatGPT, still face major limits. They only access what’s publicly available, which means no access to medical databases like UpToDate, PubMed, or Cochrane. They also work within token limits. That means they're not reviewing everything. They're sampling small chunks and trying to summarise them.

Just because a paper is online doesn’t mean AI read the whole thing—or understood it correctly. And it definitely doesn’t mean it picked the best source. It might rank a Reddit thread above a medical journal because the language is more common or the formatting is clearer.

In medicine, that’s a problem. Treatment recommendations shift. Drugs get recalled. Guidelines change fast.

And here's the critical point: AI is not a search engine. It usually doesn't fetch live results. It predicts the next likely sentence based on past patterns. That’s why it sometimes forgets what you told it ten minutes ago. It can’t hold the full conversation, let alone an entire medical database.

So while the AI may sound confident, it might be recommending treatments that were replaced last year. Or it might invent a study that never existed. Neither of those is helpful when your health is on the line.

AI Hallucinations Are Not Harmless:

If the problem stopped at outdated answers, that would already be concerning. Unfortunately, AI also does something called hallucinating: when it does not know an answer, it creates one that sounds medically plausible.

And there is another layer to this problem. AI scrapes the public internet, which means personal blogs, opinions, and half-researched theories sometimes get repeated as if they are verified medical facts. The AI cannot tell the difference between a peer-reviewed study and a persuasive stranger on a message board with a strong opinion and a username like DetoxWarrior87.

So instead of saying something responsible like “results vary and research is ongoing,” AI will confidently reply as if it is quoting a medical textbook.

Examples include:

  • Fake medical studies

  • Incorrect treatment options

  • Wrong diagnostic criteria

  • Fabricated drug interactions

  • Someone’s opinion stated as clinical truth

One study showed AI chatbots diagnosed medical conditions correctly less than half the time. Another found that eighty-eight percent of chatbot medical responses contained false information.

For someone already anxious about health, the confident tone makes the response feel trustworthy. That is the danger. It feels right even when it is wrong.

Real Consequences for Real People:

Medical mistakes do not wait months to reveal themselves. Sometimes the impact is instant. One documented case involved a patient who experienced concerning symptoms after a cardiac procedure. Instead of seeking medical help, they asked an AI tool whether it was serious. The AI suggested the symptoms were normal. Doctors later confirmed it was a stroke, which required urgent attention.

Another study found people trusted AI health explanations more than ones written by licensed physicians, even when the AI's answer was labelled as low accuracy.

Trusting a system because it sounds smart is not the same as receiving professional care.

Why AI Cannot Replace a Healthcare Professional:

Doctors do more than interpret facts. They examine patients, observe subtle body language, consider history, culture, lifestyle, and context. They apply judgment formed over years of training and experience.

AI cannot touch a patient. It cannot notice a change in skin tone. It cannot hear a tremor in someone's voice or pick up the unspoken fear behind symptoms. It cannot access the latest medical studies behind paywalls.

Most importantly, doctors are accountable for their decisions. AI is not.

So What Should You Use AI For?

At this point you might be wondering whether AI has any useful place in healthcare or whether it belongs somewhere between fortune telling and horoscope apps. The truth sits somewhere in the middle. AI can be helpful if you treat it like an enthusiastic assistant with no medical responsibility and no authority.

AI can help with:

  • Explaining medical terminology in plain English

  • Summarising general context

  • Helping you prepare questions before seeing a doctor

Those uses are similar to reading a medical dictionary or watching a short educational video. They are informational, not diagnostic.

What AI should never do is tell you what condition you have, whether a medication is safe for you, or whether a symptom is serious enough to get checked. Diagnosis requires human judgment, access to current evidence, and sometimes a physical examination. AI should never be used to diagnose, prescribe, or decide whether symptoms require care.

Your health and your life deserve more than a confident guess.

If clarity matters in marketing and business, it matters ten times more in medicine. Guessing may work for dinner recipes. It does not work for medical decisions.

If you want clarity in how you communicate with customers and write messages that help people trust you, my free 5 Minute Marketing Fix can help you start small with one sentence that explains what you do and why it matters.

👉 Download it free here.


Related Articles:

1. Why You Can’t Trust ChatGPT, Perplexity or Other AI For Legal Advice

If AI’s medical advice is made up, its legal guesses shouldn’t surprise you. Read how it confidently invents laws that don’t exist.

2. Why AI Business Advice Fails Harder Than You Think

Same confident tone. Same lack of judgment. Find out how AI business advice falls apart under real-world pressure.

3. AI Therapy Is Dangerous: Research Shows Why

When health issues are emotional, AI still can’t help. This one covers the real psychological risks of turning to bots for support.

4. AI Slop Is Breaking the Internet—Here’s What Small Brands Can Do

The web is being overrun with confident nonsense. This article explains the consequences and how small businesses can still stand out with real expertise.

5. New MIT Study Links AI Use to Weaker Critical Thinking

If you’re relying heavily on AI to think for you, this research shows what might be happening behind the scenes to your own decision-making ability.


Frequently Asked Questions

1. Can I rely fully on AI tools for research or decision making?

No. AI tools can generate helpful starting points, but they often present guesses or opinions as facts. Always verify the information with credible human sources.

2. Why does AI sometimes give confident but incorrect answers?

AI responds based on patterns in data, not understanding. If the data is incomplete, biased, or unclear, the output can sound certain while being completely wrong.

3. Should small business owners stop using AI entirely?

Not necessarily. AI can save time and support early brainstorming. It works best when paired with human judgement, expertise, and proper verification.

4. How do I check whether an AI response is accurate?

Look for real citations from trusted organisations, recent sources, or subject-matter experts. If the topic affects money, safety, legal decisions, or strategy, verify manually.

5. What types of tasks are safe to automate with AI?

Templates, drafts, outlines, summaries, and simple content ideas are often safe. Anything requiring nuance, compliance, or strategic thinking should be checked by a human.

6. Are some AI tools more reliable than others?

Different tools are trained on different datasets and have different priorities. Reliability varies, so treat all outputs as a starting point, not the final answer.

7. Is AI becoming more accurate over time?

Accuracy is improving, but so are the risks. As models grow, hallucinations become harder to spot because the responses sound more believable.

8. How can I use AI without sounding generic or robotic?

Use AI for structure or rough drafts, then rewrite in your own tone. Real experience and examples make content sound human.

9. What’s the biggest mistake people make when using AI?

Assuming the tool thinks or cares. AI doesn’t know whether it is right or wrong. It generates text, not truth.

10. Where should I be most cautious with AI?

Legal advice, financial decisions, healthcare, compliance, and anything where a mistake could cause harm or cost money. Always involve a qualified human in these cases.

👉 Need help getting your messaging right? Download the 5 Minute Marketing Fix.


Vicky Sidler

Vicky Sidler is a seasoned journalist and StoryBrand Certified Guide with a knack for turning marketing confusion into crystal-clear messaging that actually works. Armed with years of experience and an almost suspiciously large collection of pens, she creates stories that connect on a human level.


© 2025 Strategic Marketing Tribe. All rights reserved.
