Security • 8 min read • February 2026

15 Things Your Team Should Never Paste Into ChatGPT

Your employees are pasting sensitive data into AI tools right now. Here's what should never go in.

Let me guess: you haven't banned ChatGPT at your company.

Good. Banning it doesn't work anyway. People just use it on their personal phones.

But here's the problem: your team doesn't know what's safe to share and what isn't.

They're not malicious. They're not stupid. They just don't realize that everything they paste into ChatGPT is potentially being used to train OpenAI's models. (Yes, even if they have ChatGPT Plus. Unless they've specifically opted out in settings. Which they haven't.)

So here's the definitive list. Print it. Email it. Tattoo it on your forehead. Whatever it takes.

1. Customer Data

Never paste: Names, email addresses, phone numbers, addresses, purchase history, support tickets, or anything else that identifies a customer.

Why: GDPR, CCPA, and about 47 other privacy laws. Plus, you know, basic human decency.

Example of what NOT to do:
"Can you draft a response to this customer complaint: [pastes email from Karen Smith, karen.smith@email.com, including her order number and shipping address]"

Better approach:
"Can you draft a response to a customer who received a damaged product and is requesting a replacement? They're upset but polite."

2. Employee Information

Never paste: SSNs, salaries, performance reviews, disciplinary records, medical information, or home addresses.

Why: HR lawsuits are expensive. Also, it's creepy.

What people actually do:
"Help me write a performance review for John Doe (hired 3/2021, salary $85K, consistently late to meetings)"

What they should do:
"Help me write a performance review addressing punctuality issues while acknowledging strong technical contributions"

3. Financial Data

Never paste: Credit card numbers, bank account details, revenue figures, profit margins, budget breakdowns, or financial forecasts.

Why: Because your competitors would love to see your Q4 projections.

Bad idea:
"Analyze this P&L statement and find areas to cut costs: [pastes entire financial report]"

Smart approach:
"What are common cost-cutting strategies for a SaaS company with $2-5M ARR?"

4. Passwords or API Keys

Never paste: Passwords, API keys, OAuth tokens, SSH keys, database credentials, or anything that grants access to anything.

Why: Do I really need to explain this one?

True story: an engineer once pasted a bug report that still contained a live AWS access key. Whether or not it ever surfaced in training data, that key left his control the moment he hit enter. Hope he rotated it.
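One practical safeguard is a quick automated scan before anything gets pasted. The sketch below is illustrative, not exhaustive: the patterns cover an AWS-style access key, a generic `key = value` credential assignment, and a PEM private-key header, and a real scanner (or an off-the-shelf tool like a secrets linter) would cover far more.

```python
import re

# A few common credential patterns -- illustrative, not exhaustive.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic credential assignment": re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    "Private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of any credential patterns found in text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

# The key below is AWS's documented example key, not a real credential.
bug_report = "Login fails intermittently. Config: aws_key = AKIAIOSFODNN7EXAMPLE"
print(find_secrets(bug_report))  # ['AWS access key']
```

If a check like this returns anything, the text doesn't go in the chat window until the secret is removed and rotated.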

5. Proprietary Code

Never paste: Your company's source code, algorithms, database schemas, or anything else that gives away how your product works.

Why: Your IP is the only thing keeping you alive in a competitive market.

Exception: Generic code snippets that don't reveal business logic are probably fine. "How do I sort an array in JavaScript?" = safe. "Here's our entire recommendation algorithm" = not safe.

6. Legal Documents

Never paste: Contracts, NDAs, settlement agreements, legal opinions, or anything with a lawyer's name on it.

Why: Attorney-client privilege doesn't cover "and also ChatGPT."

If you need legal help: Hire a lawyer. Or at least anonymize everything before asking ChatGPT.

7. Medical Information

Never paste: Patient records, diagnoses, treatment plans, or anything covered by HIPAA.

Why: HIPAA violations can cost up to $50,000 per record. Do the math.

Healthcare workers: Use HIPAA-compliant AI tools if you need AI assistance. They exist. ChatGPT is not one of them.

8. Internal Communications

Never paste: Slack conversations, email threads, meeting transcripts, or internal memos that discuss strategy, problems, or opinions about people.

Why: "Hey ChatGPT, can you summarize this Slack thread where we discuss firing Dave?" is a terrible idea for SO many reasons.

9. Merger & Acquisition Info

Never paste: Anything related to potential acquisitions, partnerships, or business deals that haven't been publicly announced.

Why: Insider trading laws. Securities fraud. The SEC takes this stuff seriously.

10. Security Vulnerabilities

Never paste: Details about your security setup, known vulnerabilities, penetration test results, or incident response plans.

Why: You're literally handing attackers a roadmap to your weaknesses.

Bad: "We discovered a SQL injection vulnerability in our payment system. How do we fix it? [pastes vulnerable code]"

Better: "What are best practices for preventing SQL injection in [programming language]?"
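And for the record, the generic answer ChatGPT would give you is the right one: use parameterized queries, never string formatting. A minimal sketch using Python's built-in `sqlite3` (the table and data here are made up for the demo; the same principle applies to any language and database driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.execute("INSERT INTO payments (customer, amount) VALUES (?, ?)", ("alice", 19.99))

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe: f-string splicing would make the attacker's input part of the SQL itself.
# query = f"SELECT * FROM payments WHERE customer = '{user_input}'"

# Safe: a parameterized query treats the input purely as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM payments WHERE customer = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection string matches no customer
```

That's the level of detail you can safely ask an AI about: the general technique, not your vulnerable code.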

11. Trade Secrets

Never paste: Recipes, formulas, manufacturing processes, supplier lists, or anything that gives you a competitive advantage.

Why: Trade secrets only stay secret if you treat them like secrets.

Famous example: Coca-Cola's formula has been a trade secret for over 100 years. Because they don't paste it into AI chatbots.

12. Government Classified Info

Never paste: Anything marked classified, confidential, secret, or top secret.

Why: Federal prison is unpleasant.

If you work with government contracts: Check your ITAR/EAR compliance requirements. Using AI on controlled data might violate your contract.

13. Unreleased Product Details

Never paste: Features, roadmaps, launch dates, or pricing for products that haven't been announced yet.

Why: Competitors read AI training data too. Probably.

Especially bad: Asking ChatGPT to write your product launch announcement before the product is announced. Surprise ruined.

14. Personal Identifying Information (Anyone's)

Never paste: SSNs, driver's license numbers, passport numbers, or anything that could be used for identity theft.

Why: Because identity theft ruins lives.

This includes: Your own PII, your family's, your employees', your customers'. Nobody's PII goes in ChatGPT.

15. Anything You Wouldn't Post on Twitter

Final rule: If you wouldn't tweet it publicly, don't paste it into ChatGPT.

Why: Because even though OpenAI says they're careful with training data, even though they offer opt-outs, even though they claim enterprise privacy protections...

Better safe than sorry.

What CAN You Put in ChatGPT?

Okay, so what's actually safe?

  • Public information - Anything already available online
  • Generic questions - "How do I write a good email subject line?"
  • Learning/education - "Explain quantum computing like I'm 10"
  • Brainstorming - "Give me 20 blog post ideas about AI security"
  • Anonymized scenarios - "How should a manager handle an employee who's chronically late?"
  • Creative work - Writing, art prompts, story ideas (that don't include sensitive info)

The key is anonymization and generalization. Remove specifics. Change names. Strip out identifying details.
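You can even automate the first pass. The sketch below is a deliberately simple redaction pass (the patterns and placeholder names are my own for illustration); note that it catches structured PII like emails and phone numbers but not names, which still need manual review or a proper NER tool.

```python
import re

# Illustrative redaction rules -- a real scrubber would cover many more PII types.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b(?:Order|Ticket)\s*#?\d+\b", re.IGNORECASE), "[ORDER_ID]"),
]

def anonymize(text: str) -> str:
    """Replace structured PII with generic placeholders. Names are NOT caught."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

complaint = "Karen Smith (karen.smith@email.com, 555-867-5309) is asking about Order #48213."
print(anonymize(complaint))
# Karen Smith ([EMAIL], [PHONE]) is asking about [ORDER_ID].
```

Notice the name survived the scrub. Automation helps, but a human still has to read what's left before it goes anywhere.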

How to Actually Prevent This

Telling people "don't do this" doesn't work. You need systems.

1. Train Your Team

Seriously. Two-hour session. Show them real examples of data breaches caused by AI misuse. Make it visceral.

→ We run exactly this training

2. Create a Clear Policy

Write it down. Make it accessible. Update it regularly. Make people sign it.

Example policy:

AI Usage Policy (Short Version):

Before pasting anything into an AI tool, ask: "Would I be comfortable if this appeared in a competitor's training data?" If no, don't paste it.

3. Use Enterprise AI Tools

ChatGPT Enterprise, Claude for Work, and similar tools offer stronger privacy guarantees. Your data doesn't train their models.

But: You still need to train people. Because they'll still use the free version on their phones if they don't know better.

4. Monitor and Audit

Spot-check usage. Ask people what they're using AI for. Create a culture where it's normal to discuss AI tool usage openly.

→ How to audit AI usage in 30 minutes

5. Test Your Team

Send fake scenarios. See who pastes sensitive data into ChatGPT. Then train them (kindly) instead of punishing them.

→ We run AI security assessments

The Real Risk

Here's the thing: most data breaches don't come from hackers breaking through your firewall.

They come from Janet in Accounting who thought it was okay to paste customer data into ChatGPT because she was trying to save time.

Janet's not the problem. The lack of training is the problem.

Your team wants to use AI. They should use AI. It makes them more productive, more creative, more effective.

But they need to know the rules. And the rules need to be clear, simple, and enforced.

Summary: The 15 Things

  1. Customer data
  2. Employee information
  3. Financial data
  4. Passwords or API keys
  5. Proprietary code
  6. Legal documents
  7. Medical information
  8. Internal communications
  9. M&A information
  10. Security vulnerabilities
  11. Trade secrets
  12. Government classified info
  13. Unreleased product details
  14. Personal identifying information
  15. Anything you wouldn't post on Twitter

Bonus rule #16: When in doubt, leave it out.

Need Help Training Your Team?

We run 2-hour AI security workshops that teach your team exactly what's safe and what's not. Real examples. Clear guidelines. Zero technical jargon.

Schedule Training →

PROTECT YOUR DATA

Train your team on AI security before they become a statistic.

Get Started →