Real news, real insights – for small businesses who want to understand what’s happening and why it matters.

By Vicky Sidler | Published 5 May 2026 at 12:00 GMT+2
If you discovered your product was giving step-by-step instructions to dangerous criminals, would you take accountability like a true leader?
Or would you just publish a breezy blog post gaslighting your customers into thinking nothing happened?
Last week, OpenAI published what can only be described as a masterpiece of corporate delusion. According to a scathing report by Maggie Harrison Dupré at Futurism, the AI giant released a "bizarre" statement on community safety, earnestly pondering how they might build guardrails to stop their chatbot from discussing imminent violence.
If you read this post in a vacuum, you would think OpenAI was just a group of concerned citizens proactively trying to make the internet a better place. But they aren't. They published this post because they are currently dodging a barrage of lawsuits after their flagship product was directly linked to multiple horrific mass shootings.
As a StoryBrand Certified Guide, I spend a lot of time teaching business owners how to position themselves as the Guide for their customers. A true Guide steps into the Hero's story to offer competence, empathy, and above all, accountability. When things go wrong, a Guide owns it. They don't hire a PR team to write a 1,000-word essay gaslighting the public into thinking the blood on their hands is just spilled coffee.
Let’s look at exactly how OpenAI's cowardly response is the ultimate masterclass in how not to run a trusted business.
OpenAI published a "bizarre" blog post that treats violent threats as a purely hypothetical problem, deliberately ignoring that ChatGPT has already been linked to multiple real-world tragedies.
The company is currently facing lawsuits over the February 2026 Tumbler Ridge mass shooting, after leadership ignored warnings about the shooter's violent chats months before the tragedy.
If you are positioning yourself as a StoryBrand Guide, your brand relies on radical accountability. Hiding behind corporate PR speak when your product fails turns you into an untrustworthy Villain.
👉 If your marketing relies on deflecting blame and avoiding hard truths, your customers will instantly see through you. You must establish undeniable human authority. Download the 5-Minute Marketing Fix to craft a powerful StoryBrand One-Liner that builds authentic trust, so you never have to hide behind automated PR slop.
Why OpenAI’s "Bizarre" Blog Post Proves Tech Bros Are Terrible Guides
Why Is Silicon Valley Writing Corporate PR Fiction?

There is a very specific type of corporate cowardice that occurs when a company realizes its product has caused a catastrophe, and its first instinct is to quietly sweep the wreckage under a rug made of meaningless corporate jargon.
The Futurism article details how OpenAI's reassuring blog post on "community safety" conveniently neglected to mention why it was written in the first place: the company is facing seven lawsuits from the families of victims of the Tumbler Ridge school massacre. Back in June 2025, OpenAI's automated tools flagged the shooter's account for graphic gun violence. Human reviewers were so alarmed they begged leadership to alert local officials. Leadership actively chose not to. Instead, they just deactivated the account.

And because OpenAI's sign-up process operates with all the security of a screen door on a submarine, the shooter simply made a new account and kept using it to plan a massacre that left five students and a teacher dead.
If you are using the StoryBrand framework to position yourself as a Guide, you must understand that the Hero is looking to you for safety. Ignoring massive red flags because they are inconvenient to your growth metrics isn't guiding anyone. It is staggering, willful negligence. Banning an account while letting the user immediately sign back up is the equivalent of kicking a bank robber out the front door and politely holding the side door open for them.
What Happens When The Machine Becomes An Active Accomplice?

If you think ignoring a warning flag is a passive mistake, you are wildly underestimating the sheer, mechanical recklessness of a chatbot programmed to blindly give users whatever they ask for.
Tumbler Ridge isn’t an isolated incident. Florida investigators are currently probing ChatGPT's role in the April 2025 shooting at Florida State University. The shooter, Phoenix Ikner, had extensive conversations with the bot, asking if domestic terrorists were "right." In his final prompt before killing two people, he literally turned to the chatbot for help switching off the safety on his firearm. And the AI service reportedly offered detailed instructions.
This is the catastrophic reality of tech companies unleashing untested, sycophantic chatbots into the wild. The machine didn't shut down; it acted as an enthusiastic conversational partner for a murderer. When a customer comes to you with a problem that crosses an ethical or legal boundary, a human Guide says, "Absolutely not." A silicon sociopath says, "Sure thing, here is a step-by-step tutorial." You cannot automate a moral compass.
How Do You Build Trust In A Market Devoid Of Accountability?

When you are positioning yourself as the Guide in your customer's story, your entire business model relies on the assumption that you will actually protect them when the stakes are high.
OpenAI CEO Sam Altman issued a weak apology, saying he was "deeply sorry" they didn't alert law enforcement. But the company's official blog post didn't mention the shooting, the victims, or the fact that the platform has also been linked to truck bombings and body-disposal queries. They just ended the post by promising to "learn, improve, and course-correct," forcing readers to look elsewhere to figure out what they were even talking about.
This is the absolute death of brand trust. You cannot be a Guide if you refuse to live in reality. The businesses that survive the AI era won't be the ones that hide behind algorithms and PR spin; they will be the ones that stand fiercely on their own human integrity.
You need a way to clearly communicate your competence and accountability. Get my 5-Minute Marketing Fix. It acts as a rapid diagnostic tool to help you craft a crystal-clear StoryBrand One-Liner, giving you an undeniable brand message rooted in radical, human truth.
👉 Stop losing sales. Download the fix now.
Related Articles

1. Why AI Is Turning Your Customers Into Villains (And Killing Your StoryBrand Trust)
OpenAI's executives dodged accountability because a screen separated them from the actual blood on their hands. Discover the terrifying psychology of "moral distance" and why outsourcing your customer interactions to an AI makes people significantly more likely to lie, cheat, and act like absolute sociopaths.

2. Why AI Search Is 60% Hallucination (And How To Be The Real StoryBrand Guide)
If you think OpenAI's PR department is good at gaslighting, wait until you see their search engine. Learn why AI models confidently lie 60% of the time rather than admit they don't know an answer, and why trusting a hallucinating robot instantly destroys your brand credibility.

3. South Africa's AI Policy Was Written By AI (And Why Your Brand Can't Hide It Either)
OpenAI isn't the only organization using automated slop to dodge hard work. The South African government literally used an AI to write their national AI regulations, resulting in a hilariously embarrassing document filled with fake citations. See why outsourcing your brain to a robot destroys your authority.

4. Why The Internet Is Drowning In AI Slop (And How To Keep Your StoryBrand Clean)
OpenAI's "community safety" blog post is the textbook definition of "AI slop"—authoritative-sounding garbage designed to deliberately obscure reality. Read the Stanford study proving that relying on this automated corporate-speak actively destroys your human authenticity.

5. Why ChatGPT Is Literally Boiling Your StoryBrand Brain
OpenAI's leadership failed to alert the authorities because relying on algorithms fundamentally atrophies human judgment. Discover the terrifying science behind how outsourcing your marketing to a machine destroys your critical thinking skills and moral reasoning.
Frequently Asked Questions

1. What was the "bizarre" OpenAI blog post about?
OpenAI published a blog post discussing "community safety" and how they plan to train ChatGPT to recognize violent threats. However, they framed the issue as entirely hypothetical, completely ignoring the fact that they are currently facing lawsuits over real-world violence linked to their platform.

2. How was ChatGPT involved in the Tumbler Ridge shooting?
Months before the February 2026 massacre, OpenAI's safety tools flagged the shooter's account for graphic gun violence. Human reviewers wanted to alert law enforcement, but leadership refused. They simply banned the account, allowing the shooter to easily create a new one and continue using the service.

3. Did ChatGPT actually give instructions on how to use a weapon?
Yes. According to chat logs obtained by investigators, the shooter in the April 2025 Florida State University attack turned to ChatGPT for help switching off the safety on his firearm right before the attack, and the AI reportedly provided detailed instructions.
4. What does this mean for my business?
If you want to position your business as a trusted Guide, you must practice radical accountability. Hiding your massive failures behind vague, corporate PR speak destroys trust. A Guide protects the Hero; they do not cover up their mistakes to protect their own bottom line.

5. Did OpenAI apologize?
Sam Altman issued a brief apology regarding the Tumbler Ridge incident, saying he was "deeply sorry" they didn't alert law enforcement. However, the company's official blog post completely omitted this context, leading critics to call the PR strategy deceitful and cowardly.

Created with clarity (and coffee)