NEWS, MEET STRATEGY

Real news, real insights – for small businesses that want to understand what’s happening and why it matters.

Don't Do AI Research Before Answering This One Question


January 30, 2026 · 10 min read

By Vicky Sidler | Published 30 January 2026 at 12:00 GMT+2

There’s something about the way AI answers questions that makes you want to believe it. It’s fast. It’s clean. It doesn’t mumble, hesitate, or ask for clarification. It just goes. Like a teenager with a driver’s license and no insurance.

This is how we ended up with a chatbot that promised a $1 Chevy Tahoe. A chatbot that lied about Air Canada’s bereavement policy. A chatbot that confidently cited imaginary court cases—and got a real lawyer fined. Not because the technology glitched. But because it’s doing exactly what it was designed to do.

AI isn’t trained to find the truth. It’s trained to always give an answer.

Before you use it to make business decisions, there’s one question worth asking.


TL;DR:

  • AI fills in blanks with confident guesses, not verified facts

  • If the answer could cost you money or damage your business, use a professional

  • AI is great for curiosity, bad for consequences

👉 Need help getting your message right? Download the 5-Minute Marketing Fix.




The One Question That Actually Matters:

How much does it matter if this answer is wrong?

That’s it. That’s the test. If the answer to your question won’t change anything meaningful—if it’s trivia, background, or curiosity—then AI is fine. Ask away.

But if the answer could affect your money, your reputation, your strategy, or your legal standing, then using AI as your source is not research. It’s gambling.

And the odds are not in your favour.

AI Is Great at Low-Stakes Curiosity:

To be clear, I use AI all the time. I asked it last week whether wild dogs develop the same personalities as domestic dogs, or if humans bring that out in them. It gave me a poetic, slightly rambling answer that had no impact on my life or income. But it scratched the itch of curiosity, which was all I wanted.

That’s where AI shines. Random questions. Weird hypotheticals. "What if" scenarios that don’t carry consequences. It’s a fast-thinking companion for idle thoughts, and when the outcome doesn’t matter, its confidence is charming rather than dangerous.

It’s also surprisingly good for brainstorming. When you feed it your half-formed thoughts, it hands them back in different words. Sometimes that shift in language is all you need to see the idea more clearly. Not because it’s smarter, but because it mirrors your thinking in a new shape.

But many people don’t change gears when the stakes get higher. They treat a chatbot’s response to a business-critical problem the same way they treat its thoughts on sourdough starters.

That’s where things go sideways.

Why AI Gets It Wrong Without Warning:

Large language models don’t think. They predict. When you ask a question, the model doesn’t search a database or verify facts. It simply guesses the next most likely words based on patterns in its training data. That’s it.

If it doesn’t know the answer, it still responds. Not with “I don’t know,” but with something that looks like an answer. This is called a hallucination—a confident, plausible-sounding lie generated with the same tone and formatting as the truth.
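If you’re curious what “guessing the next most likely words” looks like mechanically, here’s a deliberately tiny sketch. It’s a simple bigram lookup table trained on a made-up nine-word corpus, nothing remotely like a real neural model, but it demonstrates the behaviour that matters: it produces an answer whether or not it has ever seen your question, because there is no path in the logic that leads to “I don’t know.”

```python
# Toy "language model": bigram counts from a tiny made-up corpus.
# (Illustrative only -- real LLMs use neural networks, not lookup tables,
# but the always-answers behaviour is the same.)
corpus = "the policy covers bereavement fares the policy covers refunds".split()

# Count which word follows which in the training text.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def next_word(word):
    # Seen the word before? Predict its most common follower.
    # Never seen it? Still answer, by guessing from everything it knows.
    # Notice there is no branch that returns "I don't know."
    options = bigrams.get(word) or [w for ws in bigrams.values() for w in ws]
    return max(set(options), key=options.count)

print(next_word("policy"))   # in the training data: prints "covers"
print(next_word("lawsuit"))  # never seen: still prints a confident word
```

A real model does this over billions of parameters instead of a dictionary, but the design choice is the same: the output for an unfamiliar question looks exactly as confident as the output for a familiar one.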

Even when AI tools pull information from the internet, they don’t always understand what they’re referencing. I’ve seen Perplexity hallucinate sources and make up citations that simply don’t exist—while claiming to be purpose-built for research. It can treat a Reddit thread with the same seriousness as a peer-reviewed paper. And unless you’re double-checking those sources like your credibility depends on it, you may not even notice.

I only caught these hallucinations because I’ve become obsessive about checking sources. I’ve been burned. Most people don’t check with that level of rigour. They assume if there’s a link, it must be legit.

Even worse, AI doesn’t reference everything it’s seen every time it gives you an answer. That would be far too expensive, both in time and computing power. If you’ve ever had a chatbot forget what you told it earlier in the conversation, you’ve seen this in action. It’s not checking the entire message history. It’s skimming and guessing.

So if it can’t remember a chat thread from fifteen minutes ago, do you really think it’s remembering every best practice from your industry? Or every relevant fact from your business?

The Cost of a Confident Wrong Answer:

Let’s say your sales drop. You ask AI what’s wrong, and it points to your website—blaming poor user experience, unclear calls to action, or slow mobile load times. Sounds plausible. So you redesign your homepage, tweak your buttons, and rework your layout.

But the real issue? A competitor launched an aggressive campaign that’s eating everyone’s lunch, not just yours. Your SEO rankings didn’t change. Your messaging didn’t suddenly get worse. You just got outpaced—something the AI never mentioned because it had no idea what was happening outside your site.

This kind of misdiagnosis happens more than people think. When an AI is trained on marketing articles, it sees marketing clichés. If it’s seen a thousand posts about homepage redesigns, that’s where it sends you. It can’t assess your situation. It just offers whatever patterns it's been exposed to the most.

What Professionals Do That AI Cannot:

Here’s what human experts bring to the table:

1. They ask why.

They don’t stop at the first answer. They dig. They challenge assumptions. They check whether the root problem is upstream from the one you noticed.

2. They’re accountable.

If a consultant gives bad advice, they answer for it. If an AI does, it just generates more text.

3. They understand nuance.

Professionals don’t just see data points. They interpret them in your real-world context. They know when a red flag is urgent and when it’s background noise.

4. They say “I don’t know.”

And they mean it. Good advisors pause when they need more info. AI never does. It will answer any question, right or wrong.

5. They care if they’re wrong.

Because their reputation, referrals, and liability depend on it.

This is what you’re paying for when you hire someone: the judgment to know when to say something, and the wisdom to know when not to.

A Simple Rule You Can Use:

Here’s the framework I now use myself.

If getting the answer wrong would cost me money, time, or credibility, I get a human involved.

That includes legal, financial, medical, hiring, pricing, policy, or strategy questions. Anything that touches compliance, risk, or customer trust.

If the answer doesn’t matter, AI is fine. Use it for first drafts, quick explanations, and questions you’d feel comfortable getting wrong at a dinner party.

The line is sharper than you think. And once you see it, you won’t be able to unsee it.

The Real Risk Isn’t AI. It’s You Believing It Without Question.

The real danger isn’t that AI gives wrong answers. It’s that it does so with such polish and confidence that you don’t think to double-check.

The airline didn’t mean to lie to its customer. The dealership didn’t mean to offer a luxury vehicle for a dollar. The lawyer didn’t mean to cite imaginary cases. But they all trusted the system’s output more than they trusted their own gut.

So before you trust it with your next big decision, ask yourself one thing.

If this answer is wrong, what happens?

That question could save you more than a few headaches. It could save your business.

Want help making sure your message is clear, confident, and human? Download the 5-Minute Marketing Fix.

👉 Download it free here.


Related Articles:

1. Why AI Business Advice Fails Harder Than You Think

If you liked the judgment framework in this article, this one shows what happens when real businesses follow AI advice blindly—and how much it costs them.

2. AI Hallucinations in Court—Big Trouble for Legal Trust

Think only amateurs get fooled by AI? Here's what happened when trained legal pros submitted hallucinated case law in real courtrooms.

3. Fake AI Citations Trigger UK Court Warning—Here's What Small Businesses Should Learn

The UK's highest courts are warning professionals not to trust generative AI. If that doesn't give you pause, it should.

4. AI Is Making Big Decisions in South Africa Without You

Even if you use AI carefully, others might not. This piece explains how AI is already being used by banks, insurers, and SARS to assess your business.

5. Why Privacy Still Matters in the Age of AI

You’ve learned how AI can give bad advice—but what happens to the data you share? This article explores the hidden cost of trusting chatbots with private business info.


Frequently Asked Questions About Using AI for Business Research

1. Can I trust ChatGPT or Perplexity to do business research for me?

Not fully. These tools give fast, confident answers, but they’re not always accurate. They can guess, misinterpret data, or even make things up.

2. How do I know if AI has given me the wrong answer?

You often won’t—until it’s too late. Unless you double-check the sources yourself or already know the topic well, it’s hard to spot mistakes.

3. Why does AI sound so confident even when it's wrong?

Because it’s trained to predict what sounds right, not what is right. It generates likely-sounding text, not verified facts.

4. What types of business questions are safe to ask AI?

Low-stakes ones. Definitions, quick brainstorming, summaries, or help rewording ideas. Anything that won’t cost you money or reputation if it’s wrong.

5. When should I avoid using AI for advice?

Any time the answer affects legal decisions, strategy, pricing, contracts, customer trust, compliance, or spending—talk to a human expert.

6. Can AI hallucinate sources or make up references?

Yes. Even research-focused tools like Perplexity can cite fake studies or misrepresent what a source says. Always check the links and context.

7. Does AI look at everything before giving an answer?

No. It doesn’t access your full chat history or search the whole internet. It predicts based on limited context and past training data.

8. Why do different AI tools give different answers to the same question?

Because they pull from different sources and use different training methods. The result depends on timing, phrasing, and even model behaviour.

9. Is AI useful for brainstorming ideas?

Yes. It can help reframe your thoughts, offer variations, or speed up early drafts. Just don’t treat its ideas as expert advice without checking.

10. What’s one question I should ask before using AI research?

“If this answer is wrong, what happens?” If the risk is real, don’t rely on AI alone—get professional advice.


Vicky Sidler

Vicky Sidler is a seasoned journalist and StoryBrand Certified Guide with a knack for turning marketing confusion into crystal-clear messaging that actually works. Armed with years of experience and an almost suspiciously large collection of pens, she creates stories that connect on a human level.


© 2026 Strategic Marketing Tribe. All rights reserved.
