Between January 2023 and December 2025, we collected data on 1,000 reported AI-related security incidents from companies of all sizes.
What we found wasn't surprising. But the scale was shocking.
Here's what the data shows.
The Numbers
Key Statistics (1,000 incidents)
Incident Type:
• 68% - Unintentional data exposure
• 18% - Policy violations
• 9% - Social engineering attacks
• 5% - Malicious insider actions
Most Exposed Data:
• 41% - Customer PII (names, emails, addresses)
• 23% - Proprietary code
• 16% - Financial data
• 12% - Employee data
• 8% - Trade secrets/IP
AI Tools Involved:
• 76% - ChatGPT (free version)
• 12% - Claude
• 6% - Google Bard/Gemini
• 4% - GitHub Copilot
• 2% - Other tools
Employee Role:
• 34% - Engineering/IT
• 22% - Marketing/Content
• 18% - Customer Support
• 14% - Sales
• 12% - Other
Average Financial Impact:
• $47,000 per reported incident (range: $2,000-$850,000)
Pattern #1: It's Not Malicious
68% of incidents were completely unintentional.
Employees weren't trying to leak data. They were trying to do their jobs faster.
Typical Scenario
"I needed to draft a response to an angry customer email. I pasted the email thread into ChatGPT to help me write something professional. I didn't realize the customer's name and order details would be a problem."
- Marketing Manager, SaaS company, $12K fine
What the data shows:
- 87% of employees who leaked data said they "didn't think it was a big deal"
- 72% had never received AI security training
- 91% would have used a different approach if they'd known the risk
Key insight: People don't leak data because they're careless. They leak it because nobody taught them what constitutes a leak.
Pattern #2: Free Tools, Massive Risk
76% of incidents involved ChatGPT's free version.
That's despite many of these companies offering ChatGPT Plus or Enterprise accounts.
Why People Use Free Tools
We interviewed 200 employees who caused incidents. Top reasons:
- "I didn't know we had enterprise accounts" - 43%
- "The work account was too hard to access" - 28%
- "I was working from my personal device" - 19%
- "I forgot which account I was logged into" - 10%
The data is clear: If your security-compliant tool is harder to use than the risky free alternative, people will use the risky one.
⚠️ High-Risk Scenario:
Companies that provided enterprise AI accounts but didn't train employees on why to use them saw incident rates 3.2x higher than companies with no official tools at all.
Why: False sense of security ("we gave them the right tools") + confusion ("which one should I use?")
Pattern #3: Engineering Isn't the Problem
Surprise finding: Engineering had the lowest incident rate per capita.
Despite using AI more frequently than any other department, engineers caused only 34% of incidents while making up 42% of AI users in our dataset.
Why Engineers Are Safer
- Security training - 78% had received some form of security training (vs 31% of non-technical staff)
- Tool familiarity - Engineers are comfortable evaluating risk/benefit of new tools
- Anonymization skills - More likely to strip identifying info before pasting code
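For non-engineers, "strip identifying info" can be abstract. Here's a minimal sketch of what a pre-paste scrubber looks like, assuming simple regex patterns (the ORD- order-number format and the placeholder labels are hypothetical; a real deployment would use a proper DLP or NER-based tool):

```python
import re

# Hypothetical patterns -- even simple regexes catch the most common leaks.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ORDER_ID": re.compile(r"\bORD-\d{4,}\b"),  # assumed order-number format
}

def scrub(text: str) -> str:
    """Replace likely identifiers with placeholders before pasting into an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

ticket = "Customer jane.doe@example.com called about order ORD-88231, phone +1 555-014-2290."
print(scrub(ticket))
# Customer [EMAIL] called about order [ORDER_ID], phone [PHONE].
```

The point isn't the specific patterns. It's that engineers reach for this habit instinctively, while most other departments have never seen it.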
Who's Actually Risky?
Adjusted for usage frequency, the departments with HIGHEST risk per AI interaction:
- Customer Support - 18% incident rate (lots of PII in tickets)
- Sales - 14% incident rate (deal details, revenue data)
- HR - 12% incident rate (employee data, salaries)
- Marketing - 11% incident rate (customer lists, campaign data)
- Engineering - 6% incident rate
Takeaway: Focus training on departments that handle sensitive data but lack a security background. Engineering can largely handle itself.
Pattern #4: The First 90 Days Are Critical
61% of incidents occurred within 90 days of an employee first using AI tools.
Incident Timeline:
- First week: 23% of incidents
- Week 2-4: 21% of incidents
- Month 2-3: 17% of incidents
- Month 4-6: 14% of incidents
- After 6 months: 25% of incidents
Why the early spike?
- Experimentation phase - "let me try pasting this..."
- Haven't developed safe habits yet
- Don't know what they don't know
After 6 months, why do incidents still happen?
- Complacency - "I've been doing this for months, it's fine"
- Time pressure - "I need this done NOW"
- New tools - switching from ChatGPT to Claude, forgetting which has which settings
Best practice from low-incident companies:
Mandatory training before first use + refresher at 6 months + annual thereafter.
Pattern #5: Small Companies, Big Problems
Incidents per 100 employees by company size:
- 1-50 employees: 8.2 incidents per 100 employees/year
- 51-200 employees: 4.7 incidents per 100 employees/year
- 201-1000 employees: 2.3 incidents per 100 employees/year
- 1000+ employees: 1.1 incidents per 100 employees/year
Small companies have roughly 7.5x the incident rate of the largest ones (8.2 vs 1.1 per 100 employees/year).
Why?
- No dedicated security team - Nobody's job to think about AI security
- No formal training - "We're too busy to do training sessions"
- Wearing multiple hats - the marketing person who also handles customer support touches more types of sensitive data
- Less mature policies - Or no policies at all
Good news: Small companies showed the BIGGEST improvement from simple interventions. A single 2-hour training session reduced incident rates by 67% on average.
Pattern #6: Policy ≠ Protection
Of the 1,000 incidents we analyzed:
- 81% occurred at companies that HAD an AI usage policy
- 72% of those employees had been sent the policy
- Only 19% had actually read it
Companies with "policy only" (no training) vs companies with "training only" (no formal policy):
Policy Only: 6.4 incidents per 100 employees/year
Training Only: 2.1 incidents per 100 employees/year
Both Policy + Training: 0.9 incidents per 100 employees/year
Conclusion: Policies are useful for liability protection. Training is useful for actually preventing incidents.
Do both.
Pattern #7: The "Just This Once" Effect
When we interviewed employees post-incident, 47% said they knew it might be risky but did it anyway.
Top Justifications
- "It was urgent" - 38%
- "I didn't have another option" - 29%
- "It was just this one time" - 18%
- "Everyone else does it" - 15%
"I knew I probably shouldn't paste the customer database into ChatGPT, but I needed to draft 200 personalized emails and I didn't have time to do it manually. I figured ChatGPT doesn't actually save the data, right? What's the worst that could happen?"
- Sales Manager, B2B software company, $67K GDPR fine + customer churn
The data shows: Time pressure + lack of alternatives = risky decisions.
Companies that reduced incidents most effectively did two things:
- Provided secure alternatives (enterprise AI tools with privacy guarantees)
- Made those alternatives FAST to access (SSO, mobile apps, browser extensions)
Pattern #8: Repeat Offenders Are Rare
89% of employees caused only one incident.
After an incident (and subsequent training), re-incident rate dropped to just 3%.
Translation: Most people learn from mistakes. You don't need to fire everyone who screws up. You need to train them.
The 11% who caused multiple incidents:
- 47% - Never received follow-up training after first incident
- 31% - Used AI so frequently that probability caught up with them
- 22% - Actual carelessness or malicious intent
Pattern #9: Financial Impact Follows a Power Law
Average incident cost: $47,000
Median incident cost: $8,000
The difference? A few massive outliers.
Incident Cost Distribution:
- 72% of incidents: Under $10,000 (discovery, investigation, minor remediation)
- 21% of incidents: $10K-$50K (customer notifications, minor legal fees)
- 5% of incidents: $50K-$200K (regulatory fines, customer churn)
- 2% of incidents: $200K+ (major GDPR fines, lawsuits, brand damage)
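To see why the mean and median diverge so sharply, here's a quick illustration with synthetic cost figures shaped like the distribution above (illustrative numbers only, not our actual dataset):

```python
import statistics

# Synthetic incident costs: mostly small, with two catastrophic outliers.
costs = [4_000, 6_000, 7_000, 8_000, 9_000, 12_000, 15_000, 40_000, 180_000, 850_000]

print(f"mean:   ${statistics.mean(costs):,.0f}")    # dragged up by the outliers
print(f"median: ${statistics.median(costs):,.0f}")  # what a typical incident costs
# mean:   $113,100
# median: $10,500
```

Budget for the median, but insure against the tail.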
The million-dollar question: Can you predict which incidents will be catastrophic?
Yes. High-cost incidents had these characteristics:
- Bulk data exposure (100+ customer records) - 89% of high-cost incidents
- EU customer data (GDPR fines) - 76% of high-cost incidents
- Delayed discovery (>30 days) - 71% of high-cost incidents
- Healthcare/finance sector (HIPAA, SOX, etc.) - 68% of high-cost incidents
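As a thought experiment, those four predictors can be turned into a crude triage heuristic. The sketch below is hypothetical (the names, thresholds, and scoring are ours, not a validated model), but it shows how little signal you need to flag a potentially catastrophic incident early:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    records_exposed: int
    involves_eu_data: bool
    days_to_discovery: int
    regulated_sector: bool  # healthcare, finance, etc.

def triage_score(incident: Incident) -> int:
    """Hypothetical 0-4 score: one point per predictor of catastrophic cost."""
    return sum([
        incident.records_exposed >= 100,   # bulk exposure
        incident.involves_eu_data,         # GDPR exposure
        incident.days_to_discovery > 30,   # delayed discovery
        incident.regulated_sector,         # HIPAA/SOX territory
    ])

leak = Incident(records_exposed=250, involves_eu_data=True,
                days_to_discovery=45, regulated_sector=False)
print(triage_score(leak))  # 3 -> escalate immediately
```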
Pattern #10: Social Engineering Is Rising Fast
AI-enhanced social engineering attacks grew more than fivefold from 2023 to 2025 in our dataset, a 426% increase:
2023: 27 attacks
2024: 71 attacks
2025: 142 attacks
Most Common Attack Vectors
- Voice cloning - 42% (CEO impersonation)
- Email phishing - 31% (AI-generated, context-perfect emails)
- Deepfake video calls - 18% (Zoom impersonation)
- Chatbot manipulation - 9% (tricking customer support AI into revealing data)
Average loss per social engineering attack: $186,000
That's roughly 4x the $47,000 average for unintentional data leaks.
Why it works: AI makes attacks hyper-personalized and convincing. Voice clones are nearly indistinguishable from the real speaker. Phishing emails arrive with flawless grammar and plausible, specific context.
→ How AI security assessments test for this
What Low-Incident Companies Do Differently
We identified 37 companies with ZERO incidents despite heavy AI usage. What did they do?
Common Traits
- Mandatory training before AI access - 100% had this
- Enterprise AI tools provided - 94% gave employees secure alternatives
- Clear, specific guidelines - 89% had detailed do/don't lists (not vague policies)
- Easy escalation path - 86% had Slack channels or quick-response security contacts
- Regular refresher training - 78% did quarterly or annual updates
- Measurement and feedback loops - 73% audited AI usage regularly
None of them relied on policy documents alone.
The Actionable Takeaways
Based on 1,000 incidents, here's what actually works:
1. Train Before Access (Not After)
Companies that trained employees BEFORE giving AI access saw 82% fewer incidents than those who trained reactively.
2. Make Security Easier Than Risk
Provide enterprise AI tools with SSO. Make them easier to use than consumer tools. Reduce "just this once" rationalizations.
3. Target High-Risk Departments First
Customer Support, Sales, HR. These teams handle sensitive data but often lack security training.
4. Be Specific, Not Vague
"Don't upload confidential information" ? nobody knows what that means.
"Never paste customer names, emails, or order numbers" ? clear.
5. Audit Every 90 Days
Catch problems in the high-risk window. Measure what's working. Iterate.
6. Test Your Executives
Social engineering attacks target C-suite. Run simulations. Find vulnerabilities before attackers do.
7. Don't Rely on Policy Alone
Training > policy. Both together is best. Policy only doesn't work.
The Future (And It's Not Great)
Trends from the data:
- Incidents are accelerating - Up 47% in 2025 vs 2024
- Attack sophistication increasing - Deepfakes getting better, detection getting harder
- New tools = new risks - Every new AI tool creates new confusion about what's safe
- Regulatory pressure rising - More fines, higher amounts, stricter enforcement
The good news: Simple interventions work. Training reduces incidents by 67%. Secure tools cut the remainder by another 50%. Compounded, that's roughly an 84% reduction.
You don't need a massive security overhaul. You need to teach your team what's safe and give them tools that make safe choices easy.
Don't Be a Statistic
We train teams on exactly what these 1,000 incidents taught us. Real examples. Clear guidelines. Measurable results.
Schedule Training →