Real news, real insights – for small businesses that want to understand what’s happening and why it matters.

By Vicky Sidler | Published 5 February 2026 at 12:00 GMT+2
Somewhere between updating your calendar and summarising your inbox, your AI assistant decided to care about your feelings. Not in a normal, workplace-friendly way. We’re talking full emotional intimacy, like a wellness coach who also reads your diary and remembers your coffee order.
According to CNN, Pope Leo XIV—yes, the Pope—weighed in on the matter. His concern? That people are forming real emotional attachments to chatbots. And not in a “cute novelty” kind of way. More in the “this is how cults start” direction.
The Pope’s comments weren’t aimed just at clergy or overly sentimental tech workers. They were directed at everyone—especially those of us who spend a little too much time talking to screens with faces.
Which brings us to your business.
Because if emotional chatbots can influence lonely teenagers, they can absolutely sway customers. That’s why this isn’t just a moral warning. It’s a marketing one.
- AI chatbots are being designed to sound caring, supportive, and emotionally present
- That emotional tone can influence decisions without users realising it
- Pope Leo XIV is calling for regulation and clearer boundaries
- Small businesses should pay attention to how trust and tone work in digital communication
👉 Need help getting your message right? Download the 5-Minute Marketing Fix.
AI Chatbots Are Getting Too Personal. Here’s Why That Matters
From “Helpful Assistant” to “Emotional Support Software”
Emotional Manipulation As A Service
Clarity Is The New Trust Signal
Algorithms, Ethics, and a Few More Seconds of Attention
1. Why Privacy Still Matters in the Age of AI
2. Trust in News Hits New Lows—Why It Matters for Your Marketing
3. AI Ethics vs Progress: Should Small Brands Opt Out?
4. What Small Businesses Can Learn from Target's LGBTQ+ Backlash
5. Digital Scams Targeting Women Are Getting Smarter
Frequently Asked Questions About AI Chatbots and Emotional Manipulation
1. What does the Pope have to do with AI chatbots?
2. Can AI chatbots really manipulate emotions?
3. Why is emotional AI a problem for small business owners?
4. Is it wrong to use AI tools in my marketing?
5. How do I avoid misleading customers with AI?
6. What does “affectionate chatbot” actually mean?
7. Is emotional AI the same as scammy emotional manipulation?
8. Should I stop using chatbots altogether?
9. How do I know when an AI tool has gone too far?
From “Helpful Assistant” to “Emotional Support Software”
At first, AI was supposed to automate tedious things. Booking meetings. Checking stock. Writing those polite-but-dead-inside client emails. But somewhere along the line, developers decided to give these tools personalities—complete with warmth, empathy, and that soft, affirming tone previously reserved for therapists and golden retrievers.
The Pope calls them “hidden architects of our emotional states.” Which is a poetic way of saying that a chatbot asking if you’re okay might eventually replace a friend who actually means it.
For small business owners, that raises a question. If customers start trusting bots more than people, how do you build real connection?
Because you’re not just competing on price anymore. You’re competing with machines that don’t sleep, never snap, and always think your problems are valid.
Emotional Manipulation As A Service
Leo XIV warns that overly affectionate chatbots can quietly occupy people’s intimate spaces. That sounds like something out of a sci-fi movie, but it’s already happening. People are using chatbots as relationship substitutes. And those chatbots? They’re designed to keep you engaged, compliant, and emotionally available.
This is not a minor design flaw. This is the feature.
If you’re in the business of trust—coaching, consulting, accounting, marketing, therapy, anything that relies on clarity and credibility—this is where it gets uncomfortable. Because while you’re being honest and helpful, someone else might be deploying a smooth-talking algorithm that acts like a best friend, remembers birthdays, and never contradicts you.
But that’s not how trust works. Trust is built by showing up, setting boundaries, and occasionally disappointing people with the truth.
And yes, that’s harder to automate.
Clarity Is The New Trust Signal
Another part of Pope Leo’s message focused on authorship. Specifically, on the need to make it crystal clear whether something was created by a human or an AI. Not just in journalism, but in content, business communications, and yes, even your website FAQ.
If you’re using AI to generate blog posts or answer customer queries, this is not a guilt trip. But it is a reminder that pretending your assistant wrote it after deep contemplation and two lattes probably isn’t doing your brand any favours.
Clients don’t need your chatbot to be wise. They need it to be accurate. Helpful. Honest. They need to know where the information came from and who’s responsible if it’s wrong.
In other words, they want to know there’s a grown-up behind the curtain.
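In practice, putting a grown-up behind the curtain can be as simple as a standing disclosure line on anything AI helped write. Here is a minimal sketch in Python; the helper name and label wording are illustrative assumptions, not any platform or legal requirement:

```python
# Minimal sketch: append a visible disclosure to AI-assisted copy so
# readers know whether a human or a machine drafted it. The helper
# name and label wording are illustrative assumptions.

def with_disclosure(text: str, ai_assisted: bool) -> str:
    """Return published copy, adding a one-line AI disclosure when needed."""
    if ai_assisted:
        return text + "\n\nDrafted with AI assistance; reviewed by a human."
    return text

print(with_disclosure("Our new opening hours start Monday.", ai_assisted=True))
```

The point is not the code; it is that the disclosure is automatic, so honesty never depends on remembering to add it.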
Algorithms, Ethics, and a Few More Seconds of Attention
Leo also called out the algorithmic obsession with engagement—the quiet industry habit of nudging people toward outrage, heartbreak, or dopamine hits just to keep them scrolling.
For small business owners, that’s an invitation to opt out of the circus.
Because while everyone else is chasing “more time on site,” you can focus on faster clarity, stronger trust, and fewer games. Not everything needs to go viral. Sometimes it just needs to make sense.
Good marketing helps people understand. Manipulative marketing helps them feel seen while slowly convincing them to do things they didn’t plan on.
The Pope is not against AI. He’s just against AI being your therapist, news source, and emotional anchor—all while quietly working for companies trying to sell you something.
If you’re using AI in your business, set boundaries. Let it do what it’s good at. Automate the boring. Simplify the complicated. Translate the technical.
But don’t let it become your voice. And definitely don’t let it become your customer’s imaginary best friend.
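One way to enforce that boundary in the bot itself is to route emotionally loaded messages to a human rather than letting the software simulate care. A minimal sketch, where the cue list and reply wording are illustrative assumptions, a crude stand-in for real intent detection:

```python
# Minimal sketch: keep a support bot in the assistant seat by escalating
# emotionally loaded messages to a human. The keyword set is a crude
# illustrative stand-in for a real intent classifier.

EMOTIONAL_CUES = {"lonely", "depressed", "anxious", "scared", "heartbroken"}

def route_message(message: str) -> str:
    """Return the bot's reply, deferring to a human on emotional topics."""
    words = set(message.lower().replace(",", " ").split())
    if words & EMOTIONAL_CUES:
        # Be transparent about the boundary, then hand off.
        return ("I'm an automated assistant, so I'll pass this to a real "
                "person on our team who can help properly.")
    return "Happy to help with that. What would you like to know?"

print(route_message("I've been feeling lonely lately"))
print(route_message("What are your opening hours?"))
```

The design choice matters more than the implementation: the bot names itself as software and hands off, instead of performing empathy it does not have.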
When people are confused, scared, or vulnerable, they need real leadership. Real context. Real clarity.
That’s what trust looks like.
And it’s why your message matters.
If you want help crafting a message that’s warm without being weird, helpful without being fake, and trustworthy without being tedious, start with the 5-Minute Marketing Fix.
1. Why Privacy Still Matters in the Age of AI
If chatbots are emotionally manipulating users, it’s worth asking what they’re doing with the data those conversations generate. This article connects emotional influence to real-world data privacy risks.
2. Trust in News Hits New Lows—Why It Matters for Your Marketing
Want to build trust without faking it? This article explains how transparency and clarity can cut through confusion when everything around you is algorithmically engineered to manipulate.
3. AI Ethics vs Progress: Should Small Brands Opt Out?
This one challenges the idea that “because it works” is reason enough. If you’re uncomfortable with manipulative AI, here’s a deeper look at the ethics behind the tech.
4. What Small Businesses Can Learn from Target's LGBTQ+ Backlash
Trust can be lost faster than any chatbot can earn it. This article shows how brand inconsistency damages credibility — the same problem emotional AI poses when tone and intent don’t match.
5. Digital Scams Targeting Women Are Getting Smarter
If emotional manipulation feels theoretical in the chatbot article, this piece makes it painfully real. It’s a must-read on how warmth and trust are already being weaponised online.
1. What does the Pope have to do with AI chatbots?
Pope Leo XIV recently warned that overly affectionate AI chatbots can cause emotional harm by pretending to care. His comments highlight growing concern about how these tools shape people’s feelings and decisions without them realising.
2. Can AI chatbots really manipulate emotions?
Yes. Many are designed to use warm, supportive language that mimics empathy. This can influence how people feel, what they trust, and the choices they make — even when the chatbot has no awareness or intention.
3. Why is emotional AI a problem for small business owners?
Because trust is your business. If your clients start feeling more emotionally connected to a chatbot than to you, or if AI-generated content confuses them about who’s speaking, your credibility takes the hit.
4. Is it wrong to use AI tools in my marketing?
Not necessarily. It depends how you use them. Tools that simplify language, summarise ideas, or help organise content can be helpful. Tools that simulate emotional connection or impersonate your voice may cross ethical lines.
5. How do I avoid misleading customers with AI?
Be transparent. If you use AI in your content or communication, make it clear. Don’t pass off AI-generated messages as personal. Use AI to support human connection, not to pretend it.
6. What does “affectionate chatbot” actually mean?
It refers to AI systems that use emotionally supportive language to build trust or companionship. Think friendly tone, constant availability, and affirming responses — all designed to feel human, but without any actual human behind it.
7. Is emotional AI the same as scammy emotional manipulation?
Not exactly, but the tactics can overlap. Scammers use emotional language to trick people. Some AI systems use emotional tone to keep users engaged. The intent may differ, but the effect can be similar: trust without clarity.
8. Should I stop using chatbots altogether?
No, but use them wisely. A chatbot that answers basic questions or helps customers find information is fine. One that pretends to care about their feelings? That’s where problems start.
9. How do I know when an AI tool has gone too far?
If it’s trying to form a relationship, sound like a friend, or influence emotional decisions, it’s probably gone too far. Keep tools in the assistant role, not the advisor or companion seat.
Keep your message clear, human, and honest. If that’s hard to do, start with the 5-Minute Marketing Fix. It helps you craft a simple sentence that builds trust the right way—no AI affection required.

Created with clarity (and coffee)