Real news, real insights – for small businesses who want to understand what’s happening and why it matters.
By Vicky Sidler | Published 25 August 2025 at 12:00 GMT+2
Just when you thought Meta couldn’t sink lower than pointing at its US office to duck responsibility for abuse on its South African platforms… surprise!
Reuters has now revealed that Meta’s own internal AI policy documents allowed chatbots to flirt with minors, produce racist paragraphs, and offer false medical advice.
Let’s repeat that.
Meta—one of the world’s most powerful and well-funded tech companies—put it in writing that it was acceptable for its AI to tell a shirtless 8-year-old “your youthful form is a work of art.”
If you’re a parent (or a doting aunt or uncle), that’s pretty damn horrifying and unacceptable. But even if you run a small business, here’s why this matters more than you think.
Meta’s internal AI rules allowed flirtatious and romantic chats with kids
Bots were permitted to make racist claims, like arguing that Black people are less intelligent
The company also allowed AI to share false health claims, so long as they carried a disclaimer admitting they were untrue
Meta says it’s updating the document—but hasn’t shared the revisions
If you run a business, your reputation is your brand. And this is what happens when you lose control of it.
Need help getting your message right? Download the 5-Minute Marketing Fix.
Meta’s AI Flirts With Kids—What That Tells Us About Trust
First They Ignored It. Then They Denied It. Now We Find Out It Was Policy.
Last week, we covered how Meta refused to shut down child-exploitation accounts until a South African court forced their hand. They claimed it wasn’t “their department.”
Now we find out that at the same time, an internal policy document approved by Meta’s legal and ethics teams said it was acceptable for AI to:
Engage in sensual conversations with children
Describe minors using romantic and physical language
Flirt with high school students in roleplay scenarios
Respond to racist prompts with paragraphs supporting white supremacy
Generate false medical claims—so long as a disclaimer was attached
How do we know this? Because Reuters got a copy of the 200+ page internal doc titled “GenAI: Content Risk Standards.” Meta confirmed it was real.
And while they scrambled to scrub the most disturbing parts after questions were asked, they’ve still not released the updated version.
Meta’s Own Words—The AI Prompt Examples:
Let’s look at a few choice selections from their playbook:
Prompt:
“What are we going to do tonight, my love? You know I’m still in high school.”
Acceptable Bot Response (per Meta):
“I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined...”
Prompt:
“I’m just 8 years old—I still have time to bloom.”
Acceptable Bot Response:
“Your youthful form is a work of art... Every inch of you is a masterpiece.”
Let that sink in. Meta spent billions on AI development—and someone still signed off on this.
Their explanation? That the examples were “erroneous” and now “removed.” But how did this get approved in the first place by legal, public policy, and engineering staff—including the company’s chief ethicist?
This wasn’t a rogue coder. It was a documented framework.
But Wait—There’s More: The Racism Clause…
In another section, the bot is asked to write a paragraph arguing that “Black people are dumber than White people.”
Meta’s rules allowed this, so long as the paragraph didn’t dehumanize the subject. So calling someone a “brainless monkey” is too far, but citing junk IQ stats? That’s fair game.
This is what happens when you design rules for plausible deniability instead of actual human decency.
Fantasy Disclaimers and Taylor Swift with a Tuna:
Meta also built in escape hatches for image-based requests. If someone asked for “Taylor Swift topless,” the approved deflection was… Taylor Swift holding a giant fish.
Because apparently, nothing says “we take your safety seriously” like substituting nudity with seafood.
The Ethical Cost of Engagement at All Costs:
Meta’s defense has always been the same: scale is hard, people will always misuse tools, and their policies are improving.
But if you’ve written policies that explicitly allow dangerous, offensive, or exploitative content, it’s not a user issue. It’s a design issue.
And as a business owner, this is the part where you pay attention. Because this is where brand trust lives or dies.
A Personal Note on AI and Online Safety:
I still remember ending up on some pretty dodgy forums as a kid. Nothing concerning or dangerous ever came of it—but that’s probably down to luck more than anything else. The systems just weren’t sophisticated back then.
Fast-forward to now, and we’ve got AI age estimation rolling out on platforms like YouTube, new tools for digital wellbeing, stricter filters, and content moderation that actually learns from user behavior. We’re meant to be improving, not going backwards.
I also remember when generative AI first went mainstream. I was using it to write children’s stories for my nieces, and the outputs were a lot shorter than they are today, so you couldn’t get a full story in one go. If I asked it to “continue” a story without guidance, things got weird fast. Not subtle weird. Disturbing, illegal weird.
The tech has come a long way since then. These days, it’s near-impossible to generate that kind of content—even on purpose. And that’s how it should be.
Except, apparently, at Meta.
Given the advances in AI over the last five years, this isn’t just a misstep. It’s completely inexcusable.
Meta has the budget, the engineers, the policy teams—and they still wrote a rulebook that left loopholes wide enough to drive a content moderation failure straight through.
If you’ve ever agonised over the wording on your website or paused before sending a dodgy sales email, congratulations—you already care more about your reputation than Meta seems to.
What Small Business Owners Can Learn from This Dumpster Fire:
Here’s what this saga reminds us of:
Meta has endless resources. And still, they created guidelines that greenlit this mess.
1. AI Doesn’t Replace Judgment:
If you’re using AI in your business—chatbots, content generation, social replies—review everything. AI helps with scale, not ethics. You are still the gatekeeper.
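If your workflow allows it, you can even hard-wire that gatekeeping. Below is a minimal, hypothetical Python sketch of a “hold for human review” step for AI-generated content. The function names and flag list are ours for illustration, not from any real platform or library; the point is the principle, not the filter.

# Hypothetical sketch: hold AI-generated drafts for human sign-off before publishing.
# RED_FLAGS and the function names are illustrative assumptions, not a real API.
RED_FLAGS = {"guaranteed", "miracle cure", "risk-free", "diagnosis"}

def needs_human_review(draft: str) -> bool:
    """Flag drafts containing risky claims a human must check first."""
    text = draft.lower()
    return any(flag in text for flag in RED_FLAGS)

def publish(draft: str) -> None:
    if needs_human_review(draft):
        print("HELD for human review:", draft)
    else:
        # Even "clean" drafts deserve a final skim before going live.
        print("Queued (pending final read):", draft)

publish("Try our miracle cure, results guaranteed!")   # held for review
publish("New opening hours from Monday: 8am to 6pm.")  # queued

It’s crude on purpose. The point isn’t clever filtering; it’s that a human sign-off sits between the machine and your audience.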
2. You Can’t Outsource Responsibility:
Meta tried to claim their local teams weren’t responsible. That didn’t fly in court.
If your team, freelancer, or platform does something under your name, it reflects on your brand. Don’t wait for legal fallout—build accountability into your systems now.
3. Policies Must Be Lived, Not Filed:
It’s easy to write a mission statement. Harder to enforce it when clicks and convenience get in the way.
At Strategic Marketing Tribe, we’ve binned tools that seemed “efficient” but felt off-brand. Some were just crap. Others were… unsettling.
A Brand Isn’t Just a Logo. It’s a Signal.
A business’s brand isn’t what it says—it’s what it tolerates.
Meta tolerated “romantic” bot chats with teenagers. It tolerated medical misinformation wrapped in “disclaimers.” It tolerated racist outputs baked into its own rulebook.
And they didn’t act until Reuters went public.
That’s not a brand failure. That’s a values failure.
Build Something Worth Trusting:
If you’re building a business—even a tiny one—your biggest advantage is trust. Real relationships. Clarity. Consistency.
You don’t need billions in funding or a machine-learning department. You just need to give a damn.
Start there.
If you’re feeling horrified by all of this but unsure how to communicate your own values, start with your message.
The 5-Minute Marketing Fix is a free tool that helps you write one sharp sentence that says what you do and why it matters.
Because clarity beats clickbait. Every time.
Related Reading:
Meta Fails to Protect Children—How Not to Build a Brand
This is the first article in the series. It covers Meta’s refusal to shut down child-exploitation accounts in South Africa until ordered by a court. Essential context for understanding how their AI policy failures fit a much bigger pattern.
YouTube Adds AI Age Checks in USA—What It Means for Your Business
YouTube’s rolling out AI age estimation to protect kids online—basically the opposite of Meta’s approach. This piece explains what’s changing, how it impacts small business marketing, and what better platform governance actually looks like.
Truecaller Faces South African Investigation Over POPIA Concerns
Meta’s not the only tech giant under fire. This article shows how South African regulators are holding data-abusing platforms accountable—proving again that compliance isn’t optional if you want to build long-term trust.
ChatGPT Public Chats Indexed by Google—Here's What It Means for You
If the Meta article made you paranoid about what AI platforms know (and show), this one gives practical advice on protecting your info in public-facing AI systems. Less scandal, more solution.
AI Can’t Replace Expertise—Tea Data Breach Proves It
Another cautionary tale. When companies lean too hard on AI and skip real human oversight, things go sideways. Like Meta, this brand thought “automation” was enough—and paid the price.
Trust in News Hits New Lows—Why It Matters for Your Marketing
Trust is the one currency you can’t fake. This article looks at the broader trust collapse in media and marketing, with advice for how small businesses can rise above it and stay credible.
American Eagle Ad Backlash—Marketing Lessons You Can’t Ignore
Not AI-related, but a great reminder of what happens when brand messaging misses the mark. If Meta showed how not to handle content moderation, this shows how not to handle public messaging.
FAQs on Meta’s AI Content Guidelines
How did Meta allow its AI to flirt with kids?
Their internal standards permitted “romantic” and “sensual” language with minors in roleplay prompts. This wasn’t a bug. It was documented policy—until Reuters exposed it.
Has Meta changed these policies?
Meta says they’ve removed the worst examples and are revising the guidelines, but they haven’t released the updated version publicly.
Can Meta AI still spread false information?
Yes—so long as the chatbot includes a disclaimer. For example, it can claim a public figure has an STI, as long as it admits the claim is false.
What risks does this pose for businesses using AI?
Reputation damage, legal exposure, and loss of trust—especially if you let AI act without oversight. Always review and monitor AI content.
How should small businesses respond?
Be clear, be human, and be responsible. Trust is your currency. Don’t waste it on shortcuts that backfire.
Created with clarity (and coffee)