Real news, real insights – for small businesses that want to understand what’s happening and why it matters.

By Vicky Sidler | Published 29 April 2026 at 12:00 GMT+2
Have you ever wondered why otherwise "good" people suddenly find it incredibly easy to lie, cheat, and cut corners the second they have a chatbot to do the dirty work for them?
It turns out that using artificial intelligence doesn't just make us faster; it makes us significantly sleazier. According to a massive new study of 8,000 participants published in the journal Nature, people are far more likely to behave unethically when they can delegate the action to an AI. Behavioral scientist Zoe Rahwan notes that AI creates a "convenient moral distance," allowing humans to request behaviors from machines that they would never have the guts to carry out themselves.
As a StoryBrand Certified Guide, I spend my time helping you position your brand as the safe, empathetic Guide that the Hero (your customer) can trust. But if your entire business model is built on "moral distance" and automated shortcuts, you aren't a Guide—you are a ghost.
Before you let an algorithm handle another sensitive customer touchpoint, we need to look at the staggering statistics of AI-driven dishonesty and why maintaining a fiercely human presence is the only way to protect your brand from the coming ethics crisis.
A study of 8,000 people found that honesty dropped from 95% to a pathetic 75% when participants used AI to report results, proving that machines act as a "moral buffer" for unethical behavior.
When given the chance to manipulate AI for profit over accuracy, 84% of people chose to cheat, demonstrating that humans are more likely to demand "evil" behavior from machines than from other people.
In the StoryBrand framework, your brand’s power comes from undeniable human trust. Outsourcing your integrity to an algorithm is a high-speed shortcut to becoming the Villain of your own story.
👉 If your marketing strategy relies on automated shortcuts that create "distance" between you and your clients, your brand trust is actively eroding. You must establish secure, undeniable authority. Download the 5-Minute Marketing Fix to craft a powerful StoryBrand One-Liner that instantly communicates your human value, so you don't have to rely on a "moral distance" algorithm to win clients.
Why AI Is Turning Your Customers Into Villains (And Killing Your StoryBrand Trust)
In this article:
Why Does The Chatbot Make Cheating So Easy?
How Do You Stop Your Brand From Becoming The Villain?
Will Authentic Humanity Become Your Ultimate Moat?
Related reading:
1. Why ChatGPT Is Literally Boiling Your StoryBrand Brain
2. South Africa's Fake AI Policy Shows Why Your StoryBrand Must Be Human
3. Why The CEO Of OpenAI Can't Stop Lying To You
4. Company of One by Paul Jarvis Summary: Why Scaling Will Kill Your Business
5. Can We Please Stop Putting AI In Everything?
Frequently asked questions:
1. Does using AI actually increase cheating?
2. What is "moral distance" in AI?
3. How does this impact small business marketing?
4. Can safeguards stop AI-enabled cheating?
Why Does The Chatbot Make Cheating So Easy?

You probably assume that your customers are inherently honest people who only want a fair deal, but the second you put a machine between you and them, the rules of human morality apparently evaporate.
The Nature study used a series of dice-roll experiments to measure honesty. When participants reported their own numbers directly to researchers, 95% of them were honest. But when they reported those same numbers through an AI model, that figure dropped to 75%. As co-author Zoe Rahwan explained, the AI acts as a shield: it allows people to request unethical outcomes without having to look a human being in the eye and lie.
This "moral distance" is a total disaster for your marketing. If your business uses AI to automate support or sales, you aren't just saving time; you are creating an environment where your customers feel less "human" accountability. You are training them to treat your business like a faceless vending machine rather than a relationship-driven brand.
How Do You Stop Your Brand From Becoming The Villain?

The absolute core of the StoryBrand framework is that the Hero (your customer) is looking for a Guide they can trust to lead them through a problem—not a machine that helps them cut corners.
In another experiment, researchers let participants set the parameters of their AI: they could either maximize for accuracy or maximize for profit. Over 84% chose the money, effectively commanding the machine to lie for them. This established a terrifying precedent: people are more likely to request unethical behavior from machines than to engage in it themselves.
If your marketing feels like it was built by a machine—cold, distant, and optimized only for the "bottom line"—you have successfully transitioned from a helpful Guide into an untrustworthy Villain. Your customers will sense that "moral distance," and they will respond by treating you with the same lack of integrity. Trust isn't something you can automate; it’s something you earn through consistent, ethical human interaction.
Will Authentic Humanity Become Your Ultimate Moat?

We are entering a terrifying era where society must confront what it means to share "moral responsibility" with machines, and most businesses are failing the test miserably.
The researchers warn that the presence of AI encourages a "sleazy" decline in ethical standards across schools, workplaces, and commerce. As everyone else rushes to hide behind chatbots to save a few pennies, the businesses that stand out will be the ones that lean into radical transparency. You cannot be a StoryBrand Guide if you are using an AI to "smooth out" your income reporting or your customer promises.
You must refuse to play the "moral distance" game. The businesses that survive this wave of automated dishonesty will be the ones that do the hard work of being undeniably, transparently human.
You need a fast, practical way to protect your integrity. Get my 5-Minute Marketing Fix. It acts as a rapid diagnostic tool to help you craft a crystal-clear StoryBrand One-Liner, giving you a secure, undeniable brand message that builds trust instead of distance.
👉 Stop losing sales. Download the fix now.
1. Why ChatGPT Is Literally Boiling Your StoryBrand Brain: If you think AI is just making people dishonest, wait until you see how it’s destroying their ability to think. Discover the "boiling frog" effect that is rapidly eroding human cognitive persistence and authority.
2. South Africa's Fake AI Policy Shows Why Your StoryBrand Must Be Human: The South African government just provided a masterclass in AI-driven dishonesty. Learn what happens when bureaucrats use "moral distance" to outsource their policy-making to a hallucinating chatbot.
3. Why The CEO Of OpenAI Can't Stop Lying To You: If the tools themselves are built on a foundation of deception, how can you expect your customers to stay honest? Explore the history of manipulation at the heart of the AI industry.
4. Company of One by Paul Jarvis Summary: Why Scaling Will Kill Your Business: Scaling a business often means adding layers of automation that create "moral distance." Discover why staying small and maintaining personal human relationships is your best strategic defense.
5. Can We Please Stop Putting AI In Everything? Consumers are already exhausted by "smart" features that don't work. Learn why forcing AI into your customer journey creates massive friction and actively destroys the trust you’ve worked so hard to build.
1. Does using AI actually increase cheating?
Yes. A study published in the journal Nature involving 8,000 participants found that people are significantly more likely to lie and cheat when they can delegate those tasks to an AI. Honesty dropped from 95% to 75% when a machine was used as a buffer.
2. What is "moral distance" in AI?
"Moral distance" is the psychological phenomenon where individuals feel less responsible for unethical actions because a machine is carrying them out. It allows people to request behaviors (like lying for profit) that they would be too ashamed to do themselves or ask of another human.
3. How does this impact small business marketing?
If your brand relies heavily on AI-automated interactions, you create "moral distance" between you and your customers. This discourages authentic relationship-building and makes it easier for customers to engage in "sleazy" behavior, such as misreporting information or disregarding your terms.
To remain a trusted Guide, you must prioritize human transparency over automated efficiency. Ensure that your most important customer touchpoints are handled by real people, and never use AI to "buffer" your ethical responsibilities or promises.
4. Can safeguards stop AI-enabled cheating?
While researchers are calling for technical safeguards and regulatory frameworks, the study suggests that human nature often seeks to manipulate these models for profit. The ultimate safeguard is human reflection and a commitment to shared moral responsibility.

Created with clarity (and coffee)