NEWS, MEET STRATEGY

Real news, real insights – for small businesses that want to understand what’s happening and why it matters.

AI Ethics Explained for Small Business Owners


By Vicky Sidler | Published 27 August 2025 at 12:00 GMT+2

Let’s say you use ChatGPT to write product descriptions. Or maybe Canva’s AI tool made your logo. Great. Now ask yourself:

Would you still use it if you knew it was trained on stolen data, powered by dirty energy, and biased against half your customer base?

A recent newsletter from Christopher S. Penn is a much-needed gut check for business owners. Because AI isn’t just a tech tool; it’s a chain of decisions. And if you’re not asking questions about how it works, you may be part of the problem.


TL;DR

  • AI ethics is really just “how to avoid doing harm (or at least minimise it)”

  • It shows up at every stage: the company, the data, the model, the interface, the outputs

  • Not all AI is created equal—some models are trained on stolen content, others on biased data

  • Ethical AI use means asking better questions before using the tool

  • Penn’s RAFT framework = Respect, Accountability, Fairness, Transparency

  • Knowing your own values makes AI decision-making much easier

Need help getting your message right? Download the 5-Minute Marketing Fix.

First, What Do We Mean by “Ethics”?

Penn says most people feel the tension around AI—they just don’t have words for it.

They’re unsure whether it’s okay to let ChatGPT write a blog post, or if using Midjourney for ad visuals is stealing. They don’t trust Meta, but they keep running ads there. And they worry that AI will take jobs… while secretly hoping it’ll save them time.

Ethics is how we untangle all that. It’s our personal compass for what’s “right” or “wrong”—not just legally, but morally.

Three big frameworks guide most thinking:

  • Deontology: Rules are good. Follow them. (Even when the result sucks.)

  • Virtue ethics: Good people do good things. (Even if they bend the rules.)

  • Consequentialism: It’s fine if it helps more than it harms. (Even if it hurts someone.)

You probably live somewhere in the middle. And that’s the point. You can’t apply ethics to AI until you know what your own are.

From Idea to Output: The AI Value Chain

Penn maps AI out like a factory assembly line. It’s not one thing—it’s five:

  1. The company behind the tool

  2. The data it’s trained on

  3. The model it uses

  4. The interface you interact with

  5. The output it produces

You might only see step 5. But steps 1–4 shape everything that comes out the other end. Let’s walk through each, with questions small business owners should actually be asking.

1. The Company: Would You Hire These People?

Every AI tool starts with a company. And companies are run by humans—some of them great, others… not so much.

Take Meta (Facebook). From the early days of data scraping and privacy violations to recent AI policies that allegedly allow racist content under the guise of “free speech”—this is not a team known for good judgment.

If a company’s values are out of alignment with yours, how can you trust their tools not to damage your brand?

Ask yourself:

  • Does this company have a track record of ethical decisions?

  • Would I trust them with customer data or brand messaging?

Red flag: If their terms of service include 14 ways to say, “Not our fault if this ruins your life.”

2. The Data: Did They Ask Permission First?

AI models eat data like teenagers eat snacks. And a lot of that data was scraped from blogs, social posts, music platforms, and art portfolios—without consent.

In the past, creators traded content for exposure. AI flipped that deal: it takes your work and offers no credit, no clicks, and no traffic.

One of my clients—a musician who values originality like other people value oxygen—refuses to let AI near his brand. Not in the writing, not in the visuals, not even behind the scenes. His logic? If a tool was trained on music, writing, or design work scraped without permission, then using that tool feels like benefiting from theft.

He’s not trying to make a point. He’s just drawing a line.

And honestly? I respect that.

Because whether you agree or not, he's clear on his values. That kind of clarity is rare—and powerful.

As a small business owner, you don’t have to swear off AI altogether. But you do need to decide where your own lines are.

Ask yourself:

  • Would I want my work used like this?

  • Am I comfortable gaining value from someone else’s unpaid labour?

  • If my customers knew how this content was made, would they still trust me?

There’s no one-size-fits-all answer. But pretending it doesn’t matter? That’s a decision too.


3. The Model: How Much Power Does It Burn?

AI training uses more energy than you think. Like, “11% of global power if everyone maxed out their GPUs” kind of big, according to the projections Penn cites.

Some companies care—Google’s using 66% carbon-free energy and replenishing water used for cooling data centres. Others (hi, xAI) are burning gas and pretending it’s innovation.

If your business posts about sustainability, but your favourite AI tool is an environmental dumpster fire, you’ve got a problem.

Ask yourself:

  • Do I know where this model was trained?

  • Am I okay with the environmental cost of this convenience?

You can’t fix the whole climate crisis, but you can vote with your business tools.

4. The Interface: Who’s Controlling the Conversation?

When you type a prompt into an AI tool, you’re not talking to a model—you’re talking to the interface built on top of it. And that interface has rules.

  • Some refuse political answers.

  • Some ban certain keywords.

  • Some let hate speech slide.

This isn’t about censorship. It’s about alignment. If your tool suppresses answers that matter to your audience, you might be promoting messages you don’t agree with—or missing perspectives they need to hear.

Try this test:

Ask three tools the same prompt. See how their responses differ. Decide which one fits your brand’s tone, values, and truth threshold.

5. The Output: Would You Put Your Name On It?

You get a blog post, a video script, a fake photo, a data chart. Cool. But now what?

This is where things often go sideways.

  • AI-generated music replacing paid musicians

  • Fake photos used in real marketing

  • Biased images reinforcing tired stereotypes

  • Invented facts and stats quietly slipping into your copy

And here’s where we need to talk about hallucinations. Not the psychedelic kind—the marketing kind.

In a recent South African court case, lawyers submitted legal arguments with citations that didn’t exist. Fabricated case names. Fictional rulings. All created by AI.

The lawyers blamed the tool. But the court wasn’t impressed.

And while you’re probably not trying to win a legal battle, the lesson is still clear: AI doesn’t know truth. It knows patterns. If you don’t double-check, you could end up publishing absolute nonsense with your logo on top.

Ask yourself:

  • Did I check the facts in this AI draft?

  • Could I stand behind every claim in this copy?

  • If someone challenged me, would I know where the information came from?

Here's What I Tell Clients:

You don’t need to solve AI ethics. But you do need to act like a grown-up about it. Here’s how:

1. Start with your own ethics.

Don’t borrow Google’s. Don’t mimic your competitors’. Sit down and define what “doing the right thing” means for you and your business.

2. Use the RAFT framework.

Penn’s checklist is brilliant:

  • Respect: Does this tool or output respect your values?

  • Accountability: Who takes the heat if it backfires?

  • Fairness: Is it perpetuating bias or exclusion?

  • Transparency: Can you explain how it works and why you used it?

3. Test your tools.

Create sample prompts that push the edges of your ethics. Then review how each tool performs. The goal isn’t perfection—it’s awareness.

4. Don’t let “time-saving” turn into “reputation-wrecking.”

Convenience is a poor excuse when your customers feel tricked or disrespected.

The lawyers in the South African case I mentioned had access to real librarians. One of them, a friend of mine, even had to verify if an opposing brief was real—and found that the case didn’t exist. If that’s what happens in a courtroom, imagine what slips into your homepage without anyone noticing.

Convenience is great until it makes you look careless—or dishonest. And no one hits “buy now” on a brand they don’t trust.

This Isn’t Optional Anymore:

You don’t need to be perfect. But you do need to be intentional. AI tools are already part of your marketing stack—even if you didn’t notice.

So start noticing. Ask questions. And when in doubt, go back to basics:

  • Will this help someone—or hurt them?

  • Would I want to be on the receiving end of this content?

  • Would I be embarrassed to explain how it was made?

That’s your ethics filter. Use it.

That’s what my musician client did. He looked at the landscape, asked hard questions, and then made a clear decision that fits his brand. He’s not waiting for the world to agree with him. He’s just acting in a way that lets him sleep at night.

You don’t have to do the same thing. But you do have to decide something. Because when you leave it fuzzy, that’s when values start to drift.

And your brand—the trust you’ve built, the reputation you’re known for—drifts with it.

Need help getting your message right before diving into new tools? Download the 5-Minute Marketing Fix. It’s the first step in building a brand that people trust—AI or not.

👉  Download it free here


Related Articles

AI Ethics vs Progress: Should Small Brands Opt Out?

This earlier article sets the foundation for this one. It introduces the core dilemma, whether small brands should avoid AI entirely or find middle ground, and features my musician client’s story in more depth.

AI Energy Crisis Looms—How Smart Tech Is Fighting Back

If sustainability matters to your brand, this article digs deeper into the environmental cost of AI and what companies like Google are doing to reduce it.

AI Hallucinations in Court—Big Trouble for Legal Trust

This piece unpacks the real-world legal case mentioned above, where AI hallucinations led to serious consequences. A must-read for anyone using AI in client-facing content.

AI Can't Replace Expertise—Tea Data Breach Proves It

When AI goes unchecked, mistakes happen. This article explains why human oversight still matters, especially in high-stakes business decisions.

ChatGPT Public Chats Indexed by Google—Here's What It Means for You

Privacy matters in ethical AI use. This post explains how public AI interactions could unintentionally expose sensitive business info—connecting directly to the transparency piece of the RAFT framework.

AI Visibility: What ChatGPT, Google AI, and Perplexity Cite Most

If you're curious about how AI tools source information—and whether your content is helping train them—this article offers research-backed insights for ethical visibility planning.


Frequently Asked Questions (FAQs)

What is AI ethics, and why does it matter for small businesses?

AI ethics is about deciding whether your use of AI tools causes more help or harm—to people, the environment, or your brand. If you're publishing content, automating processes, or making decisions using AI, you're responsible for the outcomes. Ethics is how you stay trustworthy while doing that.

What’s the RAFT framework, and how do I use it?

RAFT stands for Respect, Accountability, Fairness, and Transparency. It’s a simple checklist to help you evaluate any AI project or tool. Use it before you hit publish or automate anything. If you can’t answer those four questions, you’re not ready.

What’s wrong with using AI-generated content if it saves time?

Shortcuts can cost more than they save. AI tools often “hallucinate”—they make up facts, quotes, or case studies that sound real but aren’t. If you publish without checking, you could damage your reputation or even violate laws (just ask the lawyers who cited fake cases in court).

How do I know if an AI tool was trained ethically?

Check the company’s documentation. Look for mentions of licensed datasets, creator consent, or partnerships with content platforms. If the tool is vague or silent about where its training data came from, that’s a red flag.

Does it really matter what company built the AI tool?

Yes. The people behind the tool set the values that shape it. A company with a history of privacy violations or toxic leadership is more likely to cut corners in how their AI behaves. If you care about your own brand’s trustworthiness, that matters.

Is using AI bad for the environment?

It depends on the tool. Training large AI models uses a lot of power and water. Some companies offset this through renewable energy and efficiency improvements. Others don’t. If sustainability is part of your brand promise, it’s worth asking what’s under the hood.

What’s an AI hallucination, and how do I prevent them?

An AI hallucination is when a tool generates content that sounds true but isn’t—like fake legal citations or invented statistics. To prevent them:

  • Always review and fact-check AI outputs

  • Use AI for structure and ideas, not final copy

  • Take full responsibility for everything your business publishes

What if my ethics conflict with what a tool allows or suppresses?

You have two options:

  1. Choose a different tool that aligns better with your values

  2. Set stricter internal guidelines on how you use the tool

You don’t have to agree with a platform’s rules, but you should be aware of them—and decide how they affect your brand.

Can I still use AI and be ethical?

Absolutely. But ethical use takes effort. You need to know how your tools work, where their data comes from, what risks they carry, and how to use them responsibly. Ethics isn’t about being perfect—it’s about being intentional.

Where can I start if I want to clarify my own ethics before using AI?

Penn recommends recording your answers to a few personal questions, like:

  • What does helpful vs harmful mean to you?

  • How do you weigh harm against benefit in business decisions?

  • What values do you want your brand to reflect?

Once you're clear on those, use the RAFT framework to pressure-test every tool, prompt, or project.

Still figuring out your message? Download the 5-Minute Marketing Fix to sharpen your positioning before layering on tools.

Vicky Sidler

Vicky Sidler is a seasoned journalist and StoryBrand Certified Guide with a knack for turning marketing confusion into crystal-clear messaging that actually works. Armed with years of experience and an almost suspiciously large collection of pens, she creates stories that connect on a human level.


© 2025 Strategic Marketing Tribe. All rights reserved.
