Real news, real insights – for small businesses who want to understand what’s happening and why it matters.
By Vicky Sidler | Published 27 August 2025 at 12:00 GMT+2
Let’s say you use ChatGPT to write product descriptions. Or maybe Canva’s AI tool made your logo. Great. Now ask yourself:
Would you still use it if you knew it was trained on stolen data, powered by dirty energy, and biased against half your customer base?
A recent newsletter from Christopher S. Penn is a much-needed gut check for business owners. Because AI isn’t just a tech tool—it’s a chain of decisions. And if you're not asking questions about how it works, you're possibly part of the problem.
AI ethics is really just “how to avoid doing harm (or at least minimise it)”
It shows up at every stage: the company, the data, the model, the interface, the outputs
Not all AI is created equal—some models are trained on stolen content, others on biased data
Ethical AI use means asking better questions before using the tool
Penn’s RAFT framework = Respect, Accountability, Fairness, Transparency
Knowing your own values makes AI decision-making much easier
Need help getting your message right? Download the 5-Minute Marketing Fix.
AI Ethics Explained for Small Business Owners
First, What Do We Mean by “Ethics”?
From Idea to Output—The AI Value Chain:
1. The Company: Would You Hire These People?
2. The Data: Did They Ask Permission First?
3. The Model: How Much Power Does It Burn?
4. The Interface: Who’s Controlling the Conversation?
5. The Output: Would You Put Your Name On It?
1. Start with your own ethics.
4. Don’t let “time-saving” turn into “reputation-wrecking.”
AI Ethics vs Progress: Should Small Brands Opt Out?
AI Energy Crisis Looms—How Smart Tech Is Fighting Back
AI Hallucinations in Court—Big Trouble for Legal Trust
AI Can't Replace Expertise—Tea Data Breach Proves It
ChatGPT Public Chats Indexed by Google—Here's What It Means for You
AI Visibility: What ChatGPT, Google AI, and Perplexity Cite Most
Frequently Asked Questions (FAQs)
What is AI ethics, and why does it matter for small businesses?
What’s the RAFT framework, and how do I use it?
What’s wrong with using AI-generated content if it saves time?
How do I know if an AI tool was trained ethically?
Does it really matter what company built the AI tool?
Is using AI bad for the environment?
What’s an AI hallucination, and how do I prevent them?
What if my ethics conflict with what a tool allows or suppresses?
Can I still use AI and be ethical?
Where can I start if I want to clarify my own ethics before using AI?
Penn says most people feel the tension around AI—they just don’t have words for it.
They’re unsure whether it’s okay to let ChatGPT write a blog post, or if using Midjourney for ad visuals is stealing. They don’t trust Meta, but they keep running ads there. And they worry that AI will take jobs… while secretly hoping it’ll save them time.
Ethics is how we untangle all that. It’s our personal compass for what’s “right” or “wrong”—not just legally, but morally.
Three big frameworks guide most thinking:
Deontology: Rules are good. Follow them. (Even when the result sucks.)
Virtue ethics: Good people do good things. (Even if they bend the rules.)
Consequentialism: It’s fine if it helps more than it harms. (Even if it hurts someone.)
You probably live somewhere in the middle. And that’s the point. You can’t apply ethics to AI until you know what your own are.
Penn maps AI out like a factory assembly line. It’s not one thing—it’s five:
The company behind the tool
The data it’s trained on
The model it uses
The interface you interact with
The output it produces
You might only see step 5. But steps 1–4 shape everything that comes out the other end. Let’s walk through each, with questions small business owners should actually be asking.
Every AI tool starts with a company. And companies are run by humans—some of them great, others… not so much.
Take Meta (Facebook). From the early days of data scraping and privacy violations to recent AI policies that allegedly allow racist content under the guise of “free speech”—this is not a team known for good judgment.
If a company’s values are out of alignment with yours, how can you trust their tools not to damage your brand?
Ask yourself:
Does this company have a track record of ethical decisions?
Would I trust them with customer data or brand messaging?
Red flag: If their terms of service include 14 ways to say, “Not our fault if this ruins your life.”
AI models eat data like teenagers eat snacks. And a lot of that data was scraped from blogs, social posts, music platforms, and art portfolios—without consent.
In the past, creators traded content for exposure. AI flipped that deal: it takes your work and offers no credit, no clicks, and no traffic.
One of my clients—a musician who values originality like other people value oxygen—refuses to let AI near his brand. Not in the writing, not in the visuals, not even behind the scenes. His logic? If a tool was trained on music, writing, or design work scraped without permission, then using that tool feels like benefiting from theft.
He’s not trying to make a point. He’s just drawing a line.
And honestly? I respect that.
Because whether you agree or not, he's clear on his values. That kind of clarity is rare—and powerful.
As a small business owner, you don’t have to swear off AI altogether. But you do need to decide where your own lines are.
Ask yourself:
Would I want my work used like this?
Am I comfortable gaining value from someone else’s unpaid labour?
If my customers knew how this content was made, would they still trust me?
There’s no one-size-fits-all answer. But pretending it doesn’t matter? That’s a decision too.
AI training uses more energy than you think. Like, “11% of global power if everyone maxed out their GPUs” kind of big.
Some companies care—Google’s using 66% carbon-free energy and replenishing water used for cooling data centres. Others (hi, xAI) are burning gas and pretending it’s innovation.
If your business posts about sustainability, but your favourite AI tool is an environmental dumpster fire, you’ve got a problem.
Ask yourself:
Do I know where this model was trained?
Am I okay with the environmental cost of this convenience?
You can’t fix the whole climate crisis, but you can vote with your business tools.
When you type a prompt into an AI tool, you’re not talking to a model—you’re talking to the interface built on top of it. And that interface has rules.
Some refuse political answers.
Some ban certain keywords.
Some let hate speech slide.
This isn’t about censorship. It’s about alignment. If your tool suppresses answers that matter to your audience, you might be promoting messages you don’t agree with—or missing perspectives they need to hear.
Try this test:
Ask three tools the same prompt. See how their responses differ. Decide which one fits your brand’s tone, values, and truth threshold.
You get a blog post, a video script, a fake photo, a data chart. Cool. But now what?
This is where things often go sideways.
AI-generated music replacing paid musicians
Fake photos used in real marketing
Biased images reinforcing tired stereotypes
Invented facts and stats quietly slipping into your copy
And here’s where we need to talk about hallucinations. Not the psychedelic kind—the marketing kind.
In a recent South African court case, lawyers submitted legal arguments with citations that didn’t exist. Fabricated case names. Fictional rulings. All created by AI.
The lawyers blamed the tool. But the court wasn’t impressed.
And while you’re probably not trying to win a legal battle, the lesson is still clear: AI doesn’t know truth. It knows patterns. If you don’t double-check, you could end up publishing absolute nonsense with your logo on top.
Ask yourself:
Did I check the facts in this AI draft?
Could I stand behind every claim in this copy?
If someone challenged me, would I know where the information came from?
You don’t need to solve AI ethics. But you do need to act like a grown-up about it. Here’s how:
Don’t borrow Google’s. Don’t mimic your competitors’. Sit down and define what “doing the right thing” means for you and your business.
Penn’s checklist is brilliant:
Respect: Does this tool or output respect your values?
Accountability: Who takes the heat if it backfires?
Fairness: Is it perpetuating bias or exclusion?
Transparency: Can you explain how it works and why you used it?
Create sample prompts that push the edges of your ethics. Then review how each tool performs. The goal isn’t perfection—it’s awareness.
Convenience is a poor excuse when your customers feel tricked or disrespected.
The lawyers in the South African case I mentioned had access to real librarians. One of them, a friend of mine, even had to verify if an opposing brief was real—and found that the case didn’t exist. If that’s what happens in a courtroom, imagine what slips into your homepage without anyone noticing.
Convenience is great until it makes you look careless—or dishonest. And no one hits “buy now” on a brand they don’t trust.
You don’t need to be perfect. But you do need to be intentional. AI tools are already part of your marketing stack—even if you didn’t notice.
So start noticing. Ask questions. And when in doubt, go back to basics:
Will this help someone—or hurt them?
Would I want to be on the receiving end of this content?
Would I be embarrassed to explain how it was made?
That’s your ethics filter. Use it.
That’s what my musician client did. He looked at the landscape, asked hard questions, and then made a clear decision that fits his brand. He’s not waiting for the world to agree with him. He’s just acting in a way that lets him sleep at night.
You don’t have to do the same thing. But you do have to decide something. Because when you leave it fuzzy, that’s when values start to drift.
And your brand—the trust you’ve built, the reputation you’re known for—drifts with it.
Need help getting your message right before diving into new tools? Download the 5-Minute Marketing Fix. It’s the first step in building a brand that people trust—AI or not.
This earlier article sets the foundation for your current piece. It introduces the core dilemma—whether small brands should avoid AI entirely or find middle ground—and features my musician client’s story in more depth.
If sustainability matters to your brand, this article digs deeper into the environmental cost of AI and what companies like Google are doing to reduce it.
This piece unpacks the real-world legal case mentioned in your blog where AI hallucinations led to serious consequences. A must-read for anyone using AI in client-facing content.
When AI goes unchecked, mistakes happen. This article explains why human oversight still matters, especially in high-stakes business decisions.
Privacy matters in ethical AI use. This post explains how public AI interactions could unintentionally expose sensitive business info—connecting directly to the transparency piece of the RAFT framework.
If you're curious about how AI tools source information—and whether your content is helping train them—this article offers research-backed insights for ethical visibility planning.
AI ethics is about deciding whether your use of AI tools causes more help or harm—to people, the environment, or your brand. If you're publishing content, automating processes, or making decisions using AI, you're responsible for the outcomes. Ethics is how you stay trustworthy while doing that.
RAFT stands for Respect, Accountability, Fairness, and Transparency. It’s a simple checklist to help you evaluate any AI project or tool. Use it before you hit publish or automate anything. If you can’t answer those four questions, you’re not ready.
Shortcuts can cost more than they save. AI tools often “hallucinate”—they make up facts, quotes, or case studies that sound real but aren’t. If you publish without checking, you could damage your reputation or even violate laws (just ask the lawyers who cited fake cases in court).
Check the company’s documentation. Look for mentions of licensed datasets, creator consent, or partnerships with content platforms. If the tool is vague or silent about where its training data came from, that’s a red flag.
Yes. The people behind the tool set the values that shape it. A company with a history of privacy violations or toxic leadership is more likely to cut corners in how their AI behaves. If you care about your own brand’s trustworthiness, that matters.
It depends on the tool. Training large AI models uses a lot of power and water. Some companies offset this through renewable energy and efficiency improvements. Others don’t. If sustainability is part of your brand promise, it’s worth asking what’s under the hood.
An AI hallucination is when a tool generates content that sounds true but isn’t—like fake legal citations or invented statistics. To prevent them:
Always review and fact-check AI outputs
Use AI for structure and ideas, not final copy
Take full responsibility for everything your business publishes
You have two options:
Choose a different tool that aligns better with your values
Set stricter internal guidelines on how you use the tool
You don’t have to agree with a platform’s rules, but you should be aware of them—and decide how they affect your brand.
Absolutely. But ethical use takes effort. You need to know how your tools work, where their data comes from, what risks they carry, and how to use them responsibly. Ethics isn’t about being perfect—it’s about being intentional.
Penn recommends recording your answers to a few personal questions, like:
What does helpful vs harmful mean to you?
How do you weigh harm against benefit in business decisions?
What values do you want your brand to reflect?
Once you're clear on those, use the RAFT framework to pressure-test every tool, prompt, or project.
Still figuring out your message?Download the 5-Minute Marketing Fixto sharpen your positioning before layering on tools.
Created with clarity (and coffee)