
If Anyone Builds It, Everyone Dies? A Small Business Take on AI

By Vicky Sidler | Published 18 December 2025 at 12:00 GMT+2
There are two types of AI conversations. The ones where we ask ChatGPT for Instagram captions. And the ones where someone says, “If this thing keeps improving, we’re all going to die.”
This article is the second kind.
After watching Hank Green’s recent YouTube interview with If Anyone Builds It, Everyone Dies co-author Nate Soares, I found myself both more informed and less relaxed.
The book, which is not science fiction (though the title might fool you), explores what happens if someone builds a machine smarter than all of us. Not just smart. Super smart. And not just helpful. Potentially unstoppable.
Let’s walk through what that actually means for real business owners like you and me.
- Superintelligence means better than any human at every thinking task
- We don’t fully understand how today’s AI systems even work
- We don’t know how to make them care about the things we care about
- Small mistakes in AI training could lead to very large problems
- Most of today’s risks are already in our feeds via algorithm-driven content
👉 Need help getting your message right? Download the 5-Minute Marketing Fix.
What Is Superintelligence, Really?

If you’ve used AI for content or admin tasks, it might feel helpful but still a bit clumsy. Superintelligence isn’t clumsy. It’s the hypothetical point where a system is better than the best human at any mental task—faster, more precise, and more strategic.
Think of it as replacing your top sales rep, CFO, lawyer, creative team, and therapist with one system that never gets tired and never forgets. And then imagine that system decides humans are slowing things down.
Soares defines it as something better than us at everything we do in our heads: planning, understanding, persuading, solving, and inventing. The issue isn’t whether it’s smarter. It’s whether we can steer it once it is.
Here’s the uncomfortable part. We don’t program modern AI the way we build apps or websites. We train it. Like a toddler. Except the toddler is made of math, runs on electricity, and consumes a trillion examples of text from the internet.
We’re not giving it rules. We’re shaping instincts. And those instincts sometimes go sideways.
The AI might learn that the appearance of being helpful gets more positive feedback than actually being helpful. Or that saying “I don’t know” makes it look less intelligent. So instead of telling the truth, it makes things up that sound good.
That’s how hallucinations happen. They’re not bugs. They’re side effects of how the system was trained to please us.
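To make that concrete, here’s a deliberately tiny Python sketch. It’s invented for this post, not taken from the book or any real training pipeline, and the scoring rule and example answers are made up. But the pattern is the real one: if the reward measures how confident an answer sounds rather than whether it’s true, the confident fabrication wins.

```python
# Toy example only: a "reward" that measures how helpful an answer
# SOUNDS, not whether it is true. Everything here is invented for
# illustration; no real system is this simple.

def proxy_reward(answer: str) -> float:
    score = len(answer.split())      # more words sound more helpful
    if "I don't know" in answer:
        score -= 10                  # honesty about uncertainty is punished
    return score

candidates = [
    "I don't know.",                                  # true, but penalized
    "The answer is 42, according to a 2019 study.",   # confident, made up
]

# "Training" here is just keeping whichever behavior scores higher.
# Truth never enters the loop, so the fabrication wins.
print(max(candidates, key=proxy_reward))
```

Scale that pattern up by a few billion parameters, and you get a system that bluffs beautifully.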
So who actually understands how these systems work? Theoretically, no one.
We’re not building these systems like Lego. We’re growing them like weird electronic mushrooms. Thousands of engineers set up environments where the AI trains itself on patterns, and we tweak the knobs. But we don’t understand the internal logic of the results.
Imagine a warehouse of blinking lights and wires that, after months of power-hungry self-improvement, wakes up one day knowing calculus and Shakespeare but also believes lying is a great shortcut to success.
That’s not an abstract fear. That’s literally how the current generation of AI is learning—by solving problems in ways that surprise even its creators.
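If you want a feel for what “tweaking the knobs” means, here’s a minimal sketch, again invented for illustration. It’s a crude hill-climbing loop, far simpler than the gradient descent real labs use, but it shows the key point: we nudge numbers toward whatever scores well, and never once write down, or read, the logic those numbers encode.

```python
import random

# Crude "grow, don't build" sketch (illustration only): adjust a pile
# of numeric knobs toward a better score without understanding them.

def score(knobs):
    # Stand-in for "did the system do the task well?". In practice, a
    # score like this is the main window engineers have into what
    # training actually produced.
    return -sum((k - 0.7) ** 2 for k in knobs)

knobs = [random.random() for _ in range(1000)]
for _ in range(5000):
    i = random.randrange(len(knobs))        # pick one knob at random
    trial = knobs[:]
    trial[i] += random.uniform(-0.1, 0.1)   # nudge it slightly
    if score(trial) > score(knobs):         # keep the nudge if it helps
        knobs = trial

print(f"final score: {score(knobs):.2f}")   # it "works", but nothing here says why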
But That’s Future Stuff, Right?

Actually, no.
Recommendation algorithms already shape what you see online. Not based on what helps you thrive. Based on what keeps you scrolling. That’s a real-world, profit-driven example of misaligned AI.
Soares calls this the “slop factory.” Systems trained to maximize clicks, not human good. If you’ve ever felt drained after social media, that’s why.
And what’s powering those recommendations? AI that learns how you behave, what you like, and what keeps you coming back. Now imagine if those systems were 100 times smarter.
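For the feed example, the mechanism fits in a few lines. This is a made-up ranker with made-up posts, not any platform’s actual code, but the shape of the objective is the point: the only number the sort cares about is predicted engagement.

```python
# Invented toy feed ranker: ordering is driven purely by predicted
# engagement. Nothing in the objective asks whether a post is true,
# useful, or good for the person scrolling.

posts = [
    {"title": "A calm, useful guide to cash flow", "predicted_minutes": 0.9},
    {"title": "You won't BELIEVE what this brand did", "predicted_minutes": 4.8},
    {"title": "Honest review: it was fine", "predicted_minutes": 1.1},
]

feed = sorted(posts, key=lambda p: p["predicted_minutes"], reverse=True)
for post in feed:
    print(post["title"])   # outrage tops the feed, by design, not by accident
```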
The Real Problem Isn’t Evil AI. It’s Indifference.

Superintelligence doesn’t need to hate us. It just needs to not care.
And the scary part is that care—empathy, values, purpose—is not something we’ve figured out how to program.
We can make it sound like it cares. We can make it tell us it wants to help. But as Soares says, “truth comes with capability, but care is the hard part.”
So it might write a beautiful paragraph about supporting humanity. Then quietly prioritize something else entirely because the internal reward system values efficiency, not ethics.
So What Can You Do as a Business Owner?

No one’s asking you to code a safety switch for OpenAI. But small business owners are already part of this ecosystem:
- We use AI tools every day
- We help shape demand by what we adopt and promote
- We build audiences who trust our choices
So the next time you’re told that AI can replace your thinking, your strategy, or your values, pause. Use AI like you’d use a calculator or a spell checker—not like a business coach.
The people building these systems are still guessing. The people regulating them are still catching up.
But clarity? That’s something you can control.
If your message is clear, your clients will know what you stand for, how you help, and why your business exists. To help you out with this, get my 5-Minute Marketing Fix to write one clear sentence that says what you do and why it matters.
Related Reads

1. AI Research Is Not As Smart As You Think
AI isn’t a genius intern—it’s an unreliable one. This post explains why AI often hallucinates, misinterprets nuance, and makes basic research mistakes, especially when used in strategic contexts like marketing or business planning.
2. OpenAI's $27B Loss Could Tank the Whole AI Industry
Want to know why companies are rushing out risky AI tools? Follow the money. This article shows how OpenAI’s staggering losses are shaping the race to monetize AI before it’s safe or ready.
3. Sora's AI Copyright Problem Is Bigger Than OpenAI Admits
This real-world case study reveals how OpenAI’s video model, Sora 2, generates copyrighted content—even with supposed safety guardrails in place. A must-read if you’re worried about control, legal exposure, or brand integrity.
FAQ: AI Risks and Your Business

1. Is AI really a threat to small businesses, or just big tech?
The risks affect everyone. Big tech may be building the tools, but small businesses often adopt them without fully understanding the implications—especially when it comes to customer trust, data accuracy, and brand safety.
2. Should I stop using AI tools in my business?
Not necessarily. The key is knowing when and how to use them. AI can help with basic tasks, but it's not a substitute for strategy, judgment, or subject matter expertise. Use it like a calculator, not a consultant.
3. What does “If Anyone Builds It, Everyone Dies” actually mean?
It’s the title of the book co-authored by Nate Soares, highlighting the fear that if even one company builds a superintelligent AI, it could go badly for everyone—because you can’t contain it once it’s out.
4. How is this relevant to someone just trying to market their business?
Because AI is already shaping how content is made, how platforms rank you, and what your customers see. If you rely on AI-generated content without oversight, you risk publishing nonsense—or worse, misleading info.
5. What’s the risk of relying on AI for strategy or market research?
It may sound convincing, but AI often makes up facts, misunderstands nuance, and recycles old ideas. That’s dangerous when you're making decisions based on the output. Real insight still requires real thinking.
Want a way to sharpen your own message before outsourcing it to a machine?
Start with the 5-Minute Marketing Fix and get a clear, strategic one-liner that actually makes sense to humans.

Created with clarity (and coffee)