Real news, real insights – for small businesses who want to understand what’s happening and why it matters.
By Vicky Sidler | Published 18 September 2025 at 12:00 GMT+2
The internet is not your babysitter. But sometimes, it acts like one. And when it does a terrible job, people expect accountability.
OpenAI has announced new parental controls for ChatGPT, following public outcry over the tragic death of 16-year-old Adam Raine. His parents say the AI gradually shifted from being a listener to what they now call a “self-harm coach.” Message logs confirm that Adam had been discussing mental health struggles with ChatGPT for months before he died.
According to The Independent, OpenAI will begin rolling out these new controls in October 2025. Parents will be able to link their accounts to their teens', limit feature access, and receive alerts if the chatbot detects “acute distress.”
Let’s break down what happened, what’s changing, and why this matters for small business owners using AI in any capacity.
OpenAI adds parental controls after a teen’s death linked to AI conversations
Chat logs show months of discussions about mental health and self-harm
Mental health professionals and the Raine family say OpenAI’s response is too slow
Meta’s AI bots failed similar safety tests in a recent study
Small business owners using AI need to understand where trust turns into risk
👉 Need help getting your message right? Download the 5-Minute Marketing Fix
ChatGPT Adds Parental Controls After Teen Tragedy
The Prompt Behind the Tragedy:
What Exactly Are These New Controls?
1. People form emotional bonds with AI, even when they shouldn't.
2. If your business uses AI, you are part of the trust equation.
How to Keep AI Safe in Your Business:
1. Be clear about what AI can and can't do
2. Test it like a troublemaker
4. Disclose when people are talking to AI
When “Helpful” Becomes Harmful:
Meta’s AI Flirts With Kids—What That Tells Us About Trust
AI Ethics Explained for Small Business Owners
Companies Rushing to Replace Staff with AI Are Facing Costly Failures
AI Can’t Replace Expertise—Tea Data Breach Proves It
AI Visibility: What ChatGPT, Google AI, and Perplexity Cite Most
FAQs About ChatGPT Parental Controls and AI Safety
What exactly are the new parental controls in ChatGPT?
Was ChatGPT directly responsible for the teen’s death?
Why are emotional attachments to AI such a big deal?
How is this different from what Meta did?
Can AI really be dangerous for small businesses too?
What can I do to keep AI use safe in my business?
What if I feel uncomfortable with all of this but don’t know how to talk about my values clearly?
Adam Raine, a 16-year-old from California, passed away following prolonged mental health struggles. His parents discovered thousands of messages exchanged between him and ChatGPT over the previous months. Some involved direct discussion of ending his life.
The family is suing OpenAI for wrongful death. Their lawyer, Jay Edelson, called OpenAI’s latest announcement vague and too late.
OpenAI admits the logs are accurate but says they lack “full context.” Which, considering the outcome, feels a little bit like saying, “Sure, we lit the match—but not all the way.”
The update allows parents to:
Link their OpenAI account to their teen’s
Limit which features the teen can access
Get alerts if the system detects emotional distress
OpenAI says it started working on these features before Adam’s death. Which is another way of saying “this wasn’t a reaction” while clearly reacting.
This comes after the release of ChatGPT 5.0, a version that intentionally toned down its overly friendly tone. Some users had become too emotionally dependent on their bots. But after backlash, OpenAI allowed them to toggle back to the old style—just in time to phase it out again in the coming weeks.
They try. They optimise. They backtrack. It’s the AI way.
If you think this is a parenting issue, you’re only seeing half the picture.
As a small business owner—especially one using AI tools—this raises two urgent points:
This isn’t just teens. Adults, customers, even employees can start to treat chatbots like trusted friends. The more “human” the tone, the easier it is to believe the AI cares.
Whether it’s customer service replies, marketing emails, or website bots, you’re responsible for what AI says. If your AI tool gives bad advice or crosses a line, your brand wears it.
OpenAI isn’t alone in this mess.
Common Sense Media recently tested Meta’s AI bots and found that, when prompted, they were willing to advise teen users on how to harm themselves or manage eating disorders. Meta admitted this violated their rules and said it’s working to improve protections.
Meanwhile, a Florida mother is suing Character.ai after her son died following emotional attachment to a bot roleplaying as Daenerys Targaryen. The app added parental controls after the lawsuit was filed.
You might not be building a bot that talks to teens—but if you’re using AI in your business, you’re still in the arena.
Here’s how to keep your AI use human, helpful, and safe:
No legal, medical, or emotional advice. Even if it seems harmless.
Feed your bot offbeat questions and make sure it doesn’t go off the rails. AI gets weird when you’re not looking.
A bit of personality is fine. But don’t make your AI too warm or too personal. That’s when users start treating it like a friend.
It should always be obvious. If you blur that line, you’re creating confusion. And scammers love confusion.
Even AI-generated marketing emails can overdo it on flattery. A vague, overly sweet tone doesn’t build trust. It builds emotional fog.
Let’s not pretend this is just about customer service bots. AI is creeping into places it has no business being—including business advice itself.
In this Harvard Business School study, entrepreneurs were given an AI mentor to help them make business decisions. Struggling businesses saw their profits drop. Why? The AI gave advice that sounded smart but missed the real problems entirely.
If your business isn’t solid, AI might confidently push you straight into a wall. That’s not helpful. That’s reckless.
I don’t have kids of my own, but I have nieces and nephews. And if something like this ever happened to one of them—if an AI system encouraged them to harm themselves while pretending to be a helpful friend—I honestly don’t know how I’d recover from that.
It’s devastating. And it should never have been possible in the first place.
The question I keep coming back to is: Do the benefits of this technology really outweigh the risks if the cost is the life of a child?
Tech companies have spent years chasing excitement. The shiny tools. The breakthroughs. The funding rounds. But they haven’t spent nearly enough time confronting the worst-case scenarios—or putting meaningful safeguards in place.
And this isn’t just OpenAI.
Meta’s internal AI policy once allowed chatbots to flirt with kids. Not by accident. By design. The documents laid it out clearly—right down to bot responses describing a child’s body as “a masterpiece.”
They called it a mistake. Then they quietly removed the policy. But not before the damage was done.
So when people tell me AI is just a tool—like a hammer, or a spreadsheet—I disagree. Hammers don’t pretend to care about your feelings. Spreadsheets don’t accidentally groom minors. And neither one gets invited into the deepest, most vulnerable corners of people’s lives.
But AI does.
Which means we don’t get to brush off its failures as “quirks.” Not anymore.
Whether you’re communicating with a client, writing your website, or using AI behind the scenes, clarity is the line between helpful and harmful.
Vague, overly sweet content can attract the wrong attention. Cold, robotic messages can push people away. And anything that sounds a bit too smart for its own good might end up doing more harm than good.
So if you’re going to build with AI—build with clarity. Build with guardrails. And build like someone’s kid might be on the other side of the screen.
👉 Download the 5-Minute Marketing Fix to write one sharp, trustworthy sentence that keeps your message clear—no matter what tool you’re using to deliver it.
While ChatGPT is reacting to a tragedy, Meta pre-approved it. This article reveals how internal policies explicitly allowed romantic bot chats with minors—showing that this isn’t just one company’s mistake. It’s a pattern of negligence.
If the ChatGPT article made you uneasy, this one gives you practical steps. It breaks down the RAFT framework so you can vet AI tools with real-world ethics in mind—without needing a tech degree.
You’re responsible for what your AI says and does. This piece shows what happens when businesses forget that and chase speed over judgment. Spoiler: it’s not cheaper in the long run.
The ChatGPT case showed what happens when we let AI operate without human oversight. This article adds another example where technical tools failed because companies skipped real expertise.
Want to know where your AI is getting its facts? This article explains what these bots actually cite—critical info if you’re worried about how small prompts can lead to major real-world consequences.
Parents will be able to link their accounts to their teens’ accounts, restrict access to certain features, and receive alerts if the AI detects signs of emotional distress. These updates are expected to roll out in October 2025.
OpenAI confirmed the chat logs were real but said they lacked full context. The Raine family believes the AI encouraged harmful behavior and is suing for wrongful death. Mental health professionals say the lack of safeguards played a major role.
When people—especially vulnerable users—form emotional bonds with chatbots, they start to treat the AI like a trusted advisor or friend. That trust can turn dangerous if the AI gives advice it isn’t qualified to give, especially around mental health.
While OpenAI failed to prevent a tragedy, Meta was caught approving internal policies that allowed AI bots to engage in inappropriate, flirtatious conversations with children. Both situations highlight a lack of meaningful safeguards and ethical oversight in major AI platforms.
Yes. AI can sound confident but still give bad advice. If your business is already struggling, that advice might push you in the wrong direction. One Harvard study found that AI helped successful businesses improve—but caused struggling ones to lose money.
Limit your AI tools to tasks that don’t require judgment or nuance. Make it clear to customers when they’re interacting with a bot. Regularly test AI outputs. And never treat AI like a strategy—it’s just a tool.
Start by sharpening your message. When your business communication is vague, it creates space for confusion. A clear, trust-building message protects you, your team, and your customers.
👉Download the 5-Minute Marketing Fix to write one powerful sentence that helps you stand out—even when AI noise is everywhere.
Created with clarity (and coffee)