Ask My Tech Guy
Education

Your Client Data Is Going Into ChatGPT — Have You Thought About What That Means?

By Bijan Stephens · April 21, 2026 · 5 min read

I'm not here to scare you off AI. I use it every day. But I've watched small business owners paste full client contracts, employee records, and financial reports into ChatGPT without a second thought — and that deserves a real conversation.

According to data from cybersecurity researchers, roughly 35% of what employees paste into AI tools contains sensitive business information. Most of the people doing it have no idea what happens to that data after they hit send. Let's fix that.

What ChatGPT's Default Settings Actually Are

By default, when you use ChatGPT's free tier, your conversations may be used to train OpenAI's models. That means the content you paste — the client name, the contract language, the numbers — could theoretically be used as training data.

This is not buried; OpenAI is transparent about it. But the setting is opt-out, not opt-in: model training stays enabled until you explicitly turn it off, which is the opposite of what most people assume.

How to turn it off right now:

1. Open ChatGPT and click your profile icon in the bottom left
2. Go to Settings → Data Controls
3. Toggle off "Improve the model for everyone"

That's it. Once that's off, your conversations are not used for model training. You should do this today if you haven't.

Note: If you're on ChatGPT Team or Enterprise, training on your data is off by default and governed by a separate agreement. The above applies to free and individual Plus accounts.

What You Should Never Paste Into a Free AI Tool

Even with that setting off, some information simply shouldn't go into any consumer AI tool — free or paid — without a business agreement in place: client names tied to contracts, employee records, financial statements, and anything containing a Social Security number.

The practical workaround: anonymize before you paste. Instead of pasting "John Smith's contract for $48,000 worth of HVAC work," paste "a client contract for $48,000 of service work" and ask your question. You get the same answer. The data stays yours.
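If you want to make that workaround repeatable instead of doing it by hand, a tiny script can strip the obvious identifiers before anything reaches your clipboard. This is a minimal sketch, not a complete anonymizer: the `anonymize` function, its placeholder labels, and the client-name list are illustrative assumptions, and a real policy would cover more patterns than names and SSNs.

```python
import re

def anonymize(text, client_names):
    """Redact known client names and SSN-like patterns before
    pasting text into an AI tool. Illustrative sketch only."""
    for name in client_names:
        text = text.replace(name, "[CLIENT]")
    # Mask anything shaped like a U.S. SSN (###-##-####)
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    return text

prompt = anonymize(
    "Review John Smith's contract: SSN 123-45-6789, $48,000 of HVAC work.",
    ["John Smith"],
)
print(prompt)
# → Review [CLIENT]'s contract: SSN [SSN], $48,000 of HVAC work.
```

The point isn't the script itself; it's that the substitution happens on your machine, before the data ever leaves it. The AI still answers your actual question.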

What Claude and Gemini Do Differently

Claude (Anthropic): Anthropic does not train on free-tier conversations by default, and its privacy documentation is more explicit about this than most competitors'. That said, conversations are still retained for trust and safety purposes. Don't treat it as a vault.

Gemini in Google Workspace: If your business uses Google Workspace (the paid version), your data is governed by Google's Workspace data processing agreements — which are significantly more protective than consumer product terms. Your conversations in that context are not used for training. This is one underrated advantage of the Workspace version over the free consumer Gemini.

"If you wouldn't email it to a stranger, don't paste it into a free AI tool."

The Bottom Line

This is not a reason to stop using AI. It's a reason to use it with the same basic judgment you apply to anything else in your business. You lock your filing cabinet. You use a shredder. You don't email client SSNs in plain text. Apply the same instincts here.

Three things to do this week:

  1. Turn off "Improve the model for everyone" in your ChatGPT settings
  2. Make a short list of data types you'll never paste into any AI tool
  3. If you have employees using AI tools, have a five-minute conversation with them about this — most of them haven't thought about it either

That's the whole playbook. No paranoia required.

Sound familiar?

If you're using AI tools with a team and want a simple, practical data policy your people will actually follow — book a call and let's make it a one-page document. No lawyer required.

Book My Free Discovery Call →