Real news, real insights – for small businesses that want to understand what’s happening and why it matters.

By Vicky Sidler | Published 13 February 2026 at 12:00 GMT+2
Ever had a conversation with ChatGPT that made you want to fire your team and let the robot run the company? Only to come back the next day and feel like you were talking to a distracted intern with weaponized incompetence who just learned to type?
You’re not alone. Small business owners everywhere are scratching their heads, wondering how the same tool can write like Shakespeare on Tuesday and then forget your name on Wednesday.
Turns out, the inconsistency isn’t in your head. But it’s also not just in the tech. According to a mountain of research covering technical design and human psychology, the issue lies somewhere between how these models actually work and how our brains interpret their output.
Let’s unpack it.
ChatGPT runs on randomness by design
Silent updates and testing can change its behaviour
Server strain affects quality without warning
Long chats confuse it
Your own expectations and biases warp how you judge the output
👉 Need help getting your message right? Download the 5-Minute Marketing Fix.
ChatGPT Seems Smarter Some Days. Here's Why
What’s Actually Going On Technically:
1. It Runs on Randomness by Design
2. Updates Happen Without Warning
3. It Runs Slower and Dumber When Everyone’s Online
4. It Forgets Long Conversations
The Psychological Tricks Your Brain Plays:
1. You Remember the Bad Days More
2. You Want It to Agree With You
3. Familiarity Lowers the Wow Factor
What This Means For Small Business Owners:
1. Don't Do AI Research Before Answering This One Question
2. AI-generated Junk Reports Have Overwhelmed cURL's Bug Bounty Program
3. ChatGPT 5.2 vs Gemini vs Claude: Which AI Is Best for Real Work?
4. AI Models Caught Copying Copyrighted Content, Study Finds
5. Why Privacy Still Matters in the Age of AI
Frequently Asked Questions About ChatGPT’s Inconsistency
1. Why does ChatGPT give different answers to the same question?
2. Is ChatGPT worse at certain times of day?
3. Does ChatGPT forget what I told it earlier?
4. Why was ChatGPT better last week than it is today?
5. Is it my fault when ChatGPT gives bad answers?
6. How do I know if ChatGPT is just being weird or if I’m expecting too much?
7. What can I do if ChatGPT keeps giving worse answers over time?
8. Why do I trust ChatGPT more when it agrees with me?
9. Can I trust ChatGPT when it sounds confident?
10. Can I make ChatGPT completely consistent?
Because surely it can’t really be weaponized incompetence (no matter how much it feels like it is…).
ChatGPT doesn’t pull answers from a database. It predicts the next word based on probability, which means even if you ask the exact same question twice, it may give different answers each time. This is how creativity happens—but also how nonsense sneaks in.
The model includes randomness even when it’s set to a “low temperature” (aka less creative) setting. So yes, it might give a different answer to the same question or randomly shift tone mid-conversation. You’re not going crazy.
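To make that concrete, here’s a toy sketch of temperature sampling, the mechanism behind that built-in randomness. The word scores below are invented for illustration; a real model weighs tens of thousands of possible next words.

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Pick the next word from model scores using temperature sampling.

    Lower temperature sharpens the distribution (more predictable);
    higher temperature flattens it (more creative, more random).
    """
    # Scale scores by temperature, then softmax them into probabilities.
    scaled = [s / temperature for s in scores.values()]
    max_s = max(scaled)
    exps = [math.exp(s - max_s) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one word according to those probabilities.
    return random.choices(list(scores.keys()), weights=probs, k=1)[0]

# Toy scores for the next word after “The meeting is at”:
scores = {"noon": 2.0, "3pm": 1.5, "midnight": 0.1}
print(sample_next_word(scores, temperature=0.7))  # usually "noon", but not always
```

Run it a few times and you’ll see the answer change even though nothing else did, which is exactly what happens when you re-ask ChatGPT the same question.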
OpenAI runs A/B tests and silent updates all the time. This means you might get a different version of GPT than someone else using it at the same time. It’s not you. You just got the version that’s being tested for “cost improvements,” which sometimes means “a little worse, but cheaper to run.”
Researchers have shown the same model performing well one month and falling apart the next—then flipping back again. Improvements in one area often break something else. Think of it like a software update that fixes your printer but ruins your email.
Heavy server use leads to throttling. When demand spikes—like Monday mornings—you might get a slower, stripped-back version of the model. And yes, that affects output quality.
Subscribers get better access than free users, but even paying users get bumped down when things are busy. It’s like flying business class and still getting stuck on the runway.
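When a service is overloaded, well-behaved clients wait and retry instead of hammering it again immediately. Here’s a minimal sketch of that standard exponential-backoff pattern; the `request` function is a stand-in for whatever API call you’re making, not a real OpenAI client.

```python
import random
import time

def call_with_backoff(request, max_retries=4, base_delay=1.0):
    """Retry a flaky call with exponential backoff and jitter.

    `request` is any function that raises an exception when the
    service is overloaded. Delays double on each attempt (1s, 2s,
    4s, ...) plus a little randomness so retries don't all collide.
    """
    for attempt in range(max_retries):
        try:
            return request()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

If your Monday-morning prompts keep timing out or coming back half-baked, this is roughly what “try again in a minute” looks like in code.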
There’s a token limit (the model’s context window). Once your conversation gets too long, the model drops the earliest parts of the chat to stay under that limit. So if it forgets the instruction you gave 30 messages ago, it didn’t ignore you. It just ran out of brain.
The interface slows down too, which adds to the feeling that everything’s going wrong. Best solution? Start a new chat if things get weird.
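The “running out of brain” part works roughly like a sliding window. This sketch uses word count as a crude stand-in for tokens (real tokenizers count differently), but the idea is the same: once the budget is exceeded, the oldest messages go first.

```python
def trim_history(messages, max_tokens=3000):
    """Keep the most recent messages that fit in a rough token budget.

    Word count is a crude proxy for tokens. Oldest messages are
    dropped first, which is why instructions from early in a long
    chat quietly disappear.
    """
    kept, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = ["Always write in a formal tone."] + [f"Message {i}" for i in range(5)]
print(trim_history(history, max_tokens=10))  # the oldest instruction is dropped first
```

Notice that the tone instruction vanishes first, even though it was the most important message you sent. That’s why restating key instructions, or starting a fresh chat, works.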
My dad had a saying I’ll never forget: “A poor workman blames his tools.” Sometimes, the problem lies with you.
If you expect ChatGPT to be smart and it messes up, it stands out. If you expect nothing and it impresses you, you feel delighted. This is confirmation bias at work.
Also, the more you use it, the more you notice patterns and flaws. You’re no longer impressed that it writes grammatically correct sentences. You start noticing where it sounds generic or just makes stuff up. It’s not that the tool got worse—it’s that your standards got higher.
People tend to like answers that match what they already believe. So when ChatGPT confirms your thinking, it feels like genius. When it doesn’t, it feels broken.
Plus, it tends to mirror your tone and assumptions. That means if you sound confident, it does too. If you hedge, it hedges. It’s a very polite parrot, not an independent thinker.
The first time you used ChatGPT, it felt like magic. Now you expect it to write your newsletter, fix your spreadsheet, and summarise your meeting notes without blinking. When it misses the mark, it feels like a letdown.
You’re not being unfair. You’re just human.
First, the inconsistency isn’t your fault. There’s no perfect prompt that will fix it. ChatGPT is built to be unpredictable—sometimes delightfully, sometimes infuriatingly.
Second, don’t expect it to work like software. It’s more like a moody assistant who’s brilliant under the right conditions but occasionally forgets their lunch and crashes halfway through a task.
Third, you can work around it:
Start new chats when old ones get long
Use it during off-peak hours
Keep complex tasks short and clear
Save your best outputs before the chat resets or breaks
Always double-check anything critical
Finally, keep your own perception in check. Just because it feels smarter or dumber today doesn’t mean it actually is. Trust outcomes, not vibes.
If you want to make sure your own message doesn’t get lost in the confusion, simplify how you explain what you do. Clear beats clever every time.
Download my 5-Minute Marketing Fix and get one sentence that does your explaining for you.
If you just learned why ChatGPT gives inconsistent results, this article teaches you when to trust those results—and when to stop and double-check.
This case study shows what happens when people act on AI outputs without verifying them. Spoiler: it doesn’t end well for cURL.
Curious whether other tools are more consistent? This article compares leading AI models head-to-head so you can pick the right one for the task at hand.
Even when ChatGPT sounds clever, it may be parroting someone else’s work. This article adds a legal twist to the reliability conversation.
If you’re typing sensitive info into ChatGPT hoping for better results, this post explains what could happen to that data after you hit send.
ChatGPT uses randomness to generate responses. Even with the same prompt, it can produce different answers because it doesn't pull from a fixed database. It predicts each word based on probabilities, which means small variations are built in by design.
Yes. When servers are busy—like during workday mornings—the model can be slower or give lower-quality responses. You might get better results during off-peak hours, especially as a free user.
Yes. There’s a limit to how much of the conversation it can remember. Once that limit is reached, older parts of the chat get dropped, which is why it sometimes loses track of instructions or changes direction mid-conversation.
You may be using a slightly different version of the model without knowing it. OpenAI runs silent updates and A/B tests, so users often get different experiences even when nothing seems to have changed on the surface.
Not usually. Most variation comes from how the model is built and how the system is running at that moment. You can improve prompts, but inconsistency is mostly out of your control.
It’s hard to tell. Research shows people often think AI helped them when it actually slowed them down. If the answer looks right, double-check. If it looks wrong, try rephrasing or starting a new chat.
Try refreshing the chat or starting a new one. Long conversations can degrade performance. You can also switch to a different model version or test the same prompt later in the day.
Because you're human. We all tend to believe things that match what we already think. When ChatGPT aligns with your beliefs, it feels smart. When it doesn't, it feels broken—even if the answer is better.
No. Confidence in tone doesn’t equal accuracy. The model is trained to sound helpful and sure of itself, even when it’s guessing. Always verify anything important.
Not really. You can reduce confusion by asking short, clear questions and avoiding overly long chats. But some inconsistency is built into how the model works and can't be removed entirely.

Created with clarity (and coffee)