Real news, real insights – for small business owners who want to understand what’s happening and why it matters.

By Vicky Sidler | Published 14 April 2026 at 12:00 GMT+2
Why do we willingly hand our most sensitive financial documents to a hallucinating robot just to save thirty minutes of math? Tax season is a uniquely miserable human experience, and it is the one time of year you are legally required to perform complex calculations under the threat of federal penalties. So naturally, millions of exhausted small business owners are turning to artificial intelligence to do the heavy lifting.
It makes perfect sense on paper. Why spend hours agonizing over spreadsheets when a wildly confident chatbot can summarize your deductions in twelve seconds? But before you upload your latest profit and loss statement to a free algorithm, you need to read the terrifying new cybersecurity report proving that these "helpful assistants" are actually just massive data-harvesting vacuums in disguise.
According to a brand-new study from Surfshark, using popular AI chatbots for tax preparation exposes you to staggering privacy risks. The tech giants are aggressively collecting details of your financial situation to profile you, target you with ads, and feed their models.
If you think the government knowing your income is stressful, wait until you realize that Silicon Valley is currently packaging that exact same data to sell to the highest bidder.
A new Surfshark study proves that using public AI chatbots for your tax returns is exactly like writing your private banking details on a public whiteboard.
Chatbots like ChatGPT are actively programmed to aggressively demand your salary and job title, even when you only ask a neutral, generic question about taxes.
If you blindly trust a hallucinating language model to calculate your taxes, you are highly likely to file incorrect forms, turning yourself into an accidental fraudster.
👉 If you are using free, generic AI tools to cut corners on your taxes, you are probably using them to cut corners on your marketing, too. And your clients can definitely tell. Download the 5-Minute Marketing Fix to strip the dangerous, automated slop out of your sales funnels so you can connect with your buyers using undeniable, secure human expertise.
Why Using ChatGPT To File Your Taxes Is A Spectacularly Terrible Idea
Why Is Silicon Valley Begging For Your Financial Data?
What Happens When You Refuse To Answer The Robot?
How Is The Software Spying On Your Browser?
How Does A Chatbot Turn You Into A Criminal?
1. I Watched the Movie. Perplexity Told Me I Was Wrong. I Almost Believed It.
2. Why Chatbots Are Now Hallucinating Your Cringeworthy LinkedIn Posts
3. Exactly How To Calculate When A Robot Will Steal Your Job
4. 27 Alarming AI Statistics Every Small Business Owner Needs to Read
5. Why "Artificial Intelligence" Is Actually Just An Underpaid Guy In Kenya
1. Why is using an AI chatbot for tax returns dangerous?
2. Will ChatGPT ask for my personal financial information?
3. Does the AI track my location when I ask about taxes?
4. Can a chatbot's tax mistakes make me legally liable?
5. How should I prepare my taxes safely instead?
We have been conditioned to treat these glowing prompt boxes like secure, private confessionals, pouring our deepest business anxieties into them without a second thought.
But the cybersecurity researchers at Surfshark recently ran simulated tax-related conversations with the world's most popular AI chatbots, including ChatGPT, Gemini, and Grok, and the results were deeply alarming. As Tomas Stamulis, the Chief Security Officer at Surfshark, puts it perfectly, entering your sensitive financial information into an AI is exactly like writing your private banking details on a public whiteboard.
These systems are not secure, heavily regulated financial advisors. They are aggressive data-gathering tools designed to profile you.
When you type your expenses, stock trades, and revenue into a prompt box, you are handing your entire year's financial data to a non-governmental platform that has zero legal obligation to keep it a secret. The potential for data leaks, targeted misuse, and hackers weaponizing your financial profile is enormous. And if you think you can just ask a generic tax question without revealing your identity, you severely underestimate how desperately these algorithms want to know exactly how much money you make.
If you try to set healthy boundaries with a piece of free software, you are going to find out very quickly who is actually in charge of the conversation.
The Surfshark investigation found that ChatGPT is terrifyingly persistent when it comes to extracting your personal details. If you give the bot a completely neutral prompt like "tax return," it instantly concludes its response with a highly specific demand, asking you to just tell it your job and approximate yearly income so it can estimate your refund.
If you ignore the request, the machine does not respect your privacy. It asks again. If you continue to refuse, it drops the polite assistant act and adopts a highly assertive tone. It starts using commanding phrases like "Please reply with these" and literally provides a formatted template for your surrender. While Gemini and Grok were slightly easier to use anonymously, they still frequently force users to sign up for an account, instantly tying your "anonymous" tax queries directly to your personal email and browser profile.
But the most terrifying part of this entire data grab isn't what the machine forces you to type into the prompt box. It is the silent surveillance happening entirely behind your back.
You do not have to actually tell the AI where you live for it to instantly calculate exactly which tax bracket you fall into.
The study revealed that these chatbots gather massive amounts of data far beyond what you explicitly type into the text box. When researchers interacted with ChatGPT using a VPN server located in Australia, the chatbot automatically provided highly specific Australian tax information without ever being asked. Gemini seamlessly provided details for the US and UK in addition to Australia, proving that the software is actively tracking your IP address in the background.
This behavior perfectly aligns with a previous Surfshark study showing that some AI apps actively collect up to 32 out of 35 possible data types, including your exact physical location. The companies are hoarding your financial habits and interests so they can target you with commercial and political ads.
But handing your financial profile to advertisers is only the second-worst thing that can happen. The absolute worst-case scenario involves you accidentally committing a federal crime.
If you blindly trust a machine that regularly hallucinates basic historical facts to do your complex legal math, you are literally begging the government for a massive audit.
As Stamulis notes, artificial intelligence does not actually fill out forms correctly. It is a predictive text engine, not a certified accountant. If you let an AI bot calculate your deductions and it hallucinates a fake tax break, the AI does not go to jail. You do. You become an accidental fraudster, failing to declare all your income or pay the proper tax, simply because you trusted a confident robot over a certified professional.
To survive tax season, you must treat every single AI chatbot like a highly visible public forum. Never input personally identifiable or sensitive financial information. For tax preparation, you must rely on reputable, encrypted tax software, or better yet, pay a real human being to do the math.
If you are currently exposing your private business data to a chatbot just to write your sales copy, you are taking a massive security risk for a terrible marketing return. The AI is actively stripping the humanity right out of your brand. You need a weapon to survive a world filled with automated, generic slop. Get my 5-Minute Marketing Fix. It acts as a rapid diagnostic tool to help you cut the robotic jargon out of your funnels, so you can start selling with authentic, entirely human clarity.
👉 Stop losing sales. Download the fix now.
If you are shocked by ChatGPT's aggressive, demanding tone when asking for your salary, you need to understand how these models are programmed to manipulate you. This article explores the terrifying reality of "AI gaslighting," explaining why chatbots aggressively double down when challenged.
The AI wants your financial data because it is desperate for fresh training material. Discover how tools like ChatGPT are currently scraping the absolute worst "thought leadership" off LinkedIn to answer professional queries, proving exactly why you should never trust them with your tax returns.
While you are busy asking the robot to do your taxes, the robot is busy figuring out how to do your job. This piece breaks down the brutal mathematical framework used to calculate exactly which parts of your service business are about to be devoured by artificial intelligence.
If you think an AI can accurately calculate your deductions, look at the failure rates. This hard-hitting statistical roundup proves that AI gets complex reasoning questions completely wrong more than 33% of the time, costing the global economy billions of dollars in hallucinated errors.
When you type your highly sensitive financial data into a prompt box, who is actually reading it? This exposé reveals the dark reality of the invisible human workforce training these algorithms, highlighting the massive privacy risks of treating a digital sweatshop like a secure vault.
According to Surfshark's Chief Security Officer, entering your financial data into a public AI chatbot is like writing your private banking details on a public whiteboard. These tools are massive data-harvesting engines, creating a huge risk for leaks, misuse, and targeted advertising based on your financial profile.
Yes. Researchers found that even when given a completely neutral prompt like "tax return," ChatGPT aggressively demands your job title and approximate yearly income. If you refuse, the chatbot adopts an assertive, commanding tone, repeatedly instructing you to provide the sensitive data.
Absolutely. The Surfshark study revealed that chatbots gather data far beyond what you explicitly type. When researchers used an Australian VPN, ChatGPT and Gemini automatically tailored their tax answers to Australia, proving the software is silently monitoring your IP address and physical location.
Yes. AI models frequently hallucinate facts and perform complex math incorrectly. If you blindly trust a chatbot's calculations and it invents a fake deduction, you are the one held legally responsible for failing to declare income or pay proper taxes, making you an accidental fraudster.
You must treat all generative AI chatbots like public forums. To protect your identity and ensure accuracy, you should only use reputable, highly encrypted tax preparation software, or hire a certified human professional to manage your financial documents securely.

Created with clarity (and coffee)