Enterprises Are Flying Blind Into the AI Security Crisis (and the Numbers Prove It)
A near-trillion-transaction analysis reveals a sobering truth: organizations are adopting AI faster than they can secure it, and attackers are already exploiting the gap.
The Great AI Security Disconnect
Something strange is happening across enterprise technology. Companies are racing to deploy AI tools, integrate machine learning into workflows, and automate everything from customer service to code development. Yet when security researchers decided to actually test how well these systems hold up under attack, they found something alarming: every single enterprise AI system they examined contained critical vulnerabilities.
That’s not a typo. A one hundred percent failure rate.
Zscaler’s ThreatLabz 2026 AI Security Report analyzed nearly one trillion AI and machine learning transactions across approximately 9,000 organizations throughout 2025. What they discovered paints a picture of an industry barreling toward a cliff while simultaneously pressing the accelerator.
The median time to first critical failure during red team testing was just 16 minutes, with 90% of systems compromised in under 90 minutes. In the most extreme case documented, defenses crumbled in a single second.
Think about that timeline. Security teams typically measure response times in hours or days. Attackers can now measure compromise times in minutes.
Five Key Takeaways
1. Every Enterprise AI System Tested Had Critical Vulnerabilities
Zscaler’s red team testing found critical flaws in 100% of enterprise AI systems analyzed, with a median time to first critical failure of just 16 minutes. In the most extreme case, security defenses were bypassed in a single second—demonstrating that AI systems break almost immediately under real adversarial conditions.
2. 18,000 Terabytes of Corporate Data Flowed Into AI Platforms
Enterprise data transfers to AI and ML applications surged 93% year-over-year to 18,033 terabytes, transforming tools like Grammarly and ChatGPT into massive repositories of corporate intelligence. This exposure generated 410 million data loss prevention policy violations tied to ChatGPT alone, including attempts to share Social Security numbers, source code, and medical records.
3. Shadow AI Is Bypassing Enterprise Security Controls
Approximately 77% of employees paste data into generative AI tools, and 82% of this activity occurs through personal accounts that operate entirely outside corporate oversight. Traditional data loss prevention systems weren’t designed for copy-paste workflows, leaving security teams blind to the majority of sensitive data transfers.
4. AI Applications Quadrupled While Visibility Collapsed
The number of applications driving AI/ML transactions exploded to more than 3,400, a fourfold year-over-year increase that has left many organizations without a basic inventory of active AI models or embedded features. Finance and insurance lead AI adoption at 23% of all traffic, while technology and education sectors saw transaction growth exceeding 200%.
5. Agentic AI Is Enabling Machine-Speed Cyberattacks
Autonomous AI agents are emerging as both the next insider threat and a force multiplier for attackers, capable of executing reconnaissance, exploitation, and lateral movement at speeds traditional security tools cannot match. Defenders must now assume attacks scale and adapt at machine speed, not human speed: security teams measuring response times in hours are defending against threats that compromise systems in minutes.
18,000 Terabytes of Corporate Intelligence, Flowing Outward
The scale of data flowing into AI systems has reached staggering proportions. Enterprise data transfers to AI and machine learning applications surged to 18,033 terabytes in 2025, representing a 93% year-over-year increase. To put that in perspective, that’s roughly equivalent to 3.6 billion digital photographs’ worth of corporate information being fed into external AI platforms.
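That comparison implicitly assumes an average photo size of about 5 MB, which is our assumption rather than a figure from the report. The arithmetic checks out:

```python
# Back-of-the-envelope check for the "3.6 billion photos" comparison.
# The ~5 MB average photo size is an illustrative assumption.
total_bytes = 18_033 * 10**12            # 18,033 TB of enterprise data transfers
avg_photo_bytes = 5 * 10**6              # assumed ~5 MB per photo
photos = total_bytes / avg_photo_bytes
print(f"{photos / 1e9:.1f} billion photos")  # -> 3.6 billion photos
```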
Where is all this data going? Tools like Grammarly absorbed 3,615 terabytes of enterprise content, while ChatGPT received 2,021 terabytes. These platforms have essentially become vast repositories of corporate intelligence, holding everything from strategic documents to source code to customer data.
The problem isn’t that employees are using AI tools. The problem is how they’re using them—and what they’re sharing without realizing the implications.
The scale of the risk is already measurable: 410 million data loss prevention policy violations were tied to ChatGPT alone, including attempts to share Social Security numbers, source code, and medical records.
Four hundred and ten million violations. From one application.
The Shadow AI Problem Nobody Wants to Talk About
Approximately 77% of employees paste data into generative AI tools, and 82% of this activity happens through personal accounts that bypass enterprise oversight entirely. This means the majority of sensitive data transfers are occurring completely outside corporate security controls.
The report findings align with broader industry research showing that 68% of employees use free-tier AI tools like ChatGPT via personal accounts, with 57% inputting sensitive data.
Traditional data loss prevention systems were designed to catch files being uploaded or downloaded. They weren’t built for the copy-paste era, where employees simply highlight text in a confidential document and drop it into a browser-based chatbot running under their personal Gmail account.
The most immediate generative AI-specific risk is the substantial surge in data exposure, with the rate of data policy violations associated with genAI application usage doubling last year.
This isn’t a theoretical concern. It’s measurable, accelerating, and happening across every industry.
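To make the detection gap concrete, here is a minimal sketch of what paste-aware inspection could look like, for instance in a browser-extension backend or forward proxy. The patterns, function names, and blocking behavior are illustrative assumptions, not any vendor’s actual DLP engine:

```python
import re

# Hypothetical paste-inspection rules. Real DLP engines use far richer
# detection (checksum validation, context scoring, ML classifiers);
# these regexes are illustrative only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def inspect_paste(text: str) -> list[str]:
    """Return the names of every rule that matches a block of pasted text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

pasted = "Customer SSN: 123-45-6789, see attached design doc."
hits = inspect_paste(pasted)
if hits:
    print(f"Blocked paste to unmanaged AI tool: matched {hits}")
```

The point of inspecting the pasted text itself, rather than watching for file transfers, is that it closes exactly the channel traditional DLP misses.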
Embedded AI: The Threat You Can’t See
Beyond the obvious AI applications like ChatGPT and coding assistants lies a more insidious problem. AI capabilities are now being built directly into everyday enterprise software platforms—often activated by default and operating without explicit user awareness.
These embedded AI features create what security researchers call a “quiet risk multiplier.” They inherit overshared permissions from their host applications, can ingest business content from connected systems, and establish new trust boundaries that are difficult to audit or even detect.
Among all platforms analyzed, Atlassian was a leading source of embedded AI activity, reflecting widespread use of AI-powered features within its core products, such as Jira and Confluence.
When your project management tool is quietly summarizing tickets using AI, or your documentation platform is auto-generating content suggestions, sensitive information can flow into AI systems through pathways your security team never even considered.
The result? Many organizations still lack a basic inventory of active AI models and embedded features, leaving them unaware of exactly where sensitive data is exposed.
The 3,400-Application Explosion
The number of applications driving AI and ML transactions quadrupled year-over-year to more than 3,400, increasing complexity and reducing centralized visibility.
This rapid proliferation has left many organizations with no clear map of which AI models are interacting with their data or what supply chains stand behind them. Security teams are essentially playing whack-a-mole against an exponentially growing problem.
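Building even a first-pass map does not require exotic tooling. A minimal sketch, assuming web proxy logs that expose destination hostnames; the log format and domain list are illustrative assumptions, not a complete AI signature feed:

```python
from collections import Counter

# Illustrative (not exhaustive) AI service domains. A production inventory
# would draw on a maintained feed of AI/ML application signatures.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "grammarly.com",
              "claude.ai", "gemini.google.com"}

def ai_inventory(log_lines):
    """Tally requests to known AI destinations from proxy log lines,
    assuming a simplified 'user,dest_host,bytes_out' format."""
    counts = Counter()
    for line in log_lines:
        _user, host, _bytes_out = line.strip().split(",")
        if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
            counts[host] += 1
    return counts

logs = ["alice,chat.openai.com,48213", "bob,app.grammarly.com,9120"]
print(ai_inventory(logs).most_common())
```

A tally like this won’t catch embedded AI features or personal-account usage on unmanaged devices, but it turns “no clear map” into a ranked list of destinations worth investigating.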
Enterprise AI activity rose 91% year-over-year across more than 3,400 applications. Finance and insurance remain the most AI-driven sectors by volume, accounting for 23% of all AI/ML traffic, while technology and education sectors recorded explosive year-over-year transaction growth of 202% and 184% respectively.
Engineering departments represented 48.9% of all AI usage, followed by IT at 31.8% and marketing at 6.9%. The heaviest users are precisely the departments with access to the most sensitive intellectual property and customer data.
When Machines Attack at Machine Speed
Here’s where the conversation shifts from troubling to genuinely alarming.
Moody’s 2026 cyber outlook report warned of escalating AI-driven cyberattacks, including adaptive malware and autonomous threats, as companies increasingly adopt AI without adequate safeguards.
The emergence of “agentic AI”—autonomous systems capable of executing complex tasks without human oversight—is fundamentally changing the threat landscape. According to Palo Alto Networks, AI agents represent the new insider threat to companies in 2026, as agentic AI becomes vulnerable to exploitation and an attractive target for attackers.
We have moved beyond passive chatbots into the age of autonomous agents. This shift fundamentally alters the threat landscape for organizations, transforming AI from a content generator into an active participant in enterprise infrastructure that can execute code and modify data.
Traditional security tools were designed to detect anomalies in human behavior. An AI agent that runs code perfectly 10,000 times in sequence looks normal to these systems. But that agent might be executing an attacker’s commands.
At the peak of a documented AI-orchestrated attack, the AI made thousands of requests, often multiple per second, a pace human hackers simply could not have matched.
Threat actors are using models to generate convincing lures in any language, to mutate payloads for each target, and to mine stolen datasets at a scale that manual tradecraft could never match.
The implications for defenders are stark: you can no longer assume attacks will unfold at human speed. Security teams measuring response times in hours are defending against threats that can compromise systems in minutes.
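One simple defensive signal follows directly from that observation: humans rarely sustain multiple requests per second for minutes at a time. A minimal sliding-window rate detector, where the threshold values are illustrative assumptions to tune rather than established baselines:

```python
from collections import deque

class RateDetector:
    """Flag activity whose sustained request rate exceeds a human-plausible
    ceiling. The 5 req/s over a 10 s window is an illustrative assumption."""
    def __init__(self, max_per_sec: float = 5.0, window_sec: float = 10.0):
        self.window_sec = window_sec
        self.max_requests = max_per_sec * window_sec
        self.events: deque[float] = deque()

    def observe(self, ts: float) -> bool:
        """Record one request at timestamp ts (seconds); True means flagged."""
        self.events.append(ts)
        # Evict events that have aged out of the sliding window.
        while self.events and self.events[0] < ts - self.window_sec:
            self.events.popleft()
        return len(self.events) > self.max_requests

det = RateDetector()
flags = [det.observe(i * 0.1) for i in range(120)]  # 10 req/s for 12 seconds
print(any(flags))  # True: sustained machine-speed activity gets flagged
```

Rate thresholds alone are crude, since attackers can throttle, but they invert the detection problem: instead of hunting for anomalies in human behavior, you flag behavior no human could produce.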
The 39% Solution (That Isn’t Really a Solution)
Organizations are aware that something needs to be done. The Zscaler research reveals that enterprises blocked approximately 39% of AI/ML transactions due to data exposure, privacy, and compliance concerns.
On one hand, this shows security teams are taking action. On the other hand, it reveals the scale of the problem: if four in ten AI transactions are being flagged as potential risks, something fundamental is broken in how enterprises approach AI adoption.
The tools that employees use most frequently—Grammarly, ChatGPT, Copilot, coding assistants—are also the ones most heavily blocked and most deeply involved in sensitive data interactions. These applications sit directly in daily workflows, making them simultaneously essential and dangerous.
The Regulatory Reckoning
OpenAI was fined €15 million by the Italian Data Protection Authority for training models on personal data without a clear legal basis and failing to implement adequate age verification. The first major enforcement wave under the EU AI Act is intensifying in 2026 as the comprehensive compliance framework for high-risk systems becomes fully enforceable.
This isn’t just a technical problem anymore. It’s a regulatory one. Organizations deploying AI systems without proper governance frameworks face potential fines of up to €35 million or 7% of global annual turnover under the EU AI Act, on top of existing GDPR penalties.
The DLP violations documented in the Zscaler report—attempts to share regulated healthcare data, financial records, and personally identifiable information through AI platforms—represent exactly the kind of activity that triggers compliance investigations.
What Actually Works
The security industry’s response to this challenge is coalescing around several key principles.
First, visibility. You cannot secure what you cannot see. Organizations need comprehensive inventories of every AI application, embedded feature, and model interacting with their data. This includes the shadow AI tools employees are using through personal accounts.
Second, treating AI traffic as a critical security domain. Most enterprises lack a complete view of the AI applications and services in use, including generative AI tools, AI development environments, embedded AI in SaaS, models, agents, and underlying infrastructure. Traditional perimeter security doesn’t work when the threat is data flowing outward through legitimate channels.
Third, continuous testing. The 16-minute compromise times documented in red team exercises demonstrate that AI systems fail fast under adversarial conditions. This isn’t a one-time penetration test problem—it requires ongoing validation.
Fourth, permission hygiene. AI tools and agents should operate under least-privilege access principles, just like human users. An AI assistant designed to help with scheduling doesn’t need access to your customer database.
Fifth, human-in-the-loop checkpoints for high-stakes decisions. An agent should never be allowed to transfer funds, delete data, or change access control policies without explicit human approval. These last two principles are sketched in code below.
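A minimal sketch of least-privilege scoping plus an approval gate, assuming a hypothetical agent runtime where tools are plain Python callables; every name and the approval mechanism here are illustrative:

```python
# Guardrail sketch for an agent runtime: least-privilege tool scoping plus a
# human approval gate for high-stakes actions. All names are hypothetical.
AGENT_SCOPE = {"read_ledger", "transfer_funds"}      # this agent's whole world
HIGH_STAKES = {"transfer_funds", "delete_data", "change_acl"}

def human_approves(tool: str, args: dict) -> bool:
    """Stand-in for a real approval workflow (ticket, chat prompt, etc.)."""
    return input(f"Approve {tool}({args})? [y/N] ").strip().lower() == "y"

def invoke(tool: str, args: dict, registry: dict):
    if tool not in AGENT_SCOPE:                       # principle four
        raise PermissionError(f"{tool} is outside this agent's scope")
    if tool in HIGH_STAKES and not human_approves(tool, args):  # principle five
        raise PermissionError(f"{tool} requires explicit human approval")
    return registry[tool](**args)

registry = {"read_ledger": lambda account: f"balance for {account}",
            "transfer_funds": lambda src, dst, amount: "transferred"}
print(invoke("read_ledger", {"account": "ops"}, registry))
```

The design point is that scope and approval checks live in the runtime that dispatches tool calls, not in the model’s prompt, so a jailbroken or manipulated agent still cannot reach beyond its grant.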
The Uncomfortable Truth
AI has transitioned from a productivity tool to a primary vector for autonomous, machine-speed conflict.
That’s not marketing language. That’s what nearly a trillion transactions of data reveal about where we actually are in the AI security story.
Organizations that treat AI security as an afterthought—something to address once the productivity benefits are locked in—are setting themselves up for incidents that unfold faster than their security teams can respond.
The companies that figure out how to govern AI at scale without shutting it down entirely will have a competitive advantage. The ones that don’t will find themselves explaining to regulators, customers, and shareholders how 18,000 terabytes of corporate data ended up in places it was never supposed to go.
As Jay Chaudhry, CEO and Founder of Zscaler noted: “AI is changing how businesses operate, but traditional security approaches were not designed to secure AI. Business leaders are looking for a comprehensive solution—not more point products.”
The window for getting ahead of this problem is closing. Adversaries can now automate the majority of an intrusion with almost no human expertise. Companies that don’t adopt automated, AI-powered defenses will find themselves outpaced by threats that evolve faster than any traditional security model can keep up.
The choice isn’t between adopting AI and staying secure. It’s between adopting AI with proper governance or flying blind into a crisis that’s already arrived.
To learn how Kiteworks can help, schedule a custom demo today.
Frequently Asked Questions
What is the ThreatLabz 2026 AI Security Report?

The ThreatLabz 2026 AI Security Report is an annual cybersecurity analysis published by Zscaler on January 27, 2026, examining enterprise AI usage patterns and security vulnerabilities. The report analyzed 989.3 billion AI and machine learning transactions across approximately 9,000 organizations using the Zscaler Zero Trust Exchange platform between January and December 2025.

How quickly can enterprise AI systems be compromised?

According to Zscaler’s red team testing, the median time to first critical failure in enterprise AI systems was just 16 minutes, with 90% of systems compromised in under 90 minutes. In the most extreme documented case, security defenses were bypassed in a single second, demonstrating that AI systems fail rapidly under adversarial conditions.

How much corporate data is flowing into AI platforms?

Enterprise data transfers to AI and ML applications reached 18,033 terabytes in 2025, representing a 93% year-over-year increase. Grammarly received 3,615 terabytes of enterprise content while ChatGPT absorbed 2,021 terabytes, making these platforms among the largest repositories of corporate intelligence.

What is shadow AI and why is it dangerous?

Shadow AI refers to the unauthorized use of generative AI tools through personal accounts that bypass enterprise security controls. Research indicates 77% of employees paste data into generative AI tools, with 82% of this activity occurring through unmanaged personal accounts, creating a massive blind spot for data loss prevention systems and exposing sensitive corporate information without oversight.

Which industries and departments use AI the most?

Finance and insurance are the most AI-driven sectors by transaction volume, accounting for 23% of all AI/ML traffic observed in the report. Technology and education sectors recorded the fastest growth rates at 202% and 184% year-over-year respectively. Engineering departments represent nearly half of all enterprise AI usage at 48.9%, followed by IT at 31.8%.

What is agentic AI and why is it a security threat?

Agentic AI refers to autonomous AI systems capable of executing complex tasks without human oversight, including reconnaissance, exploitation, and lateral movement across networks. Security experts warn that agentic AI is emerging as a primary attack vector because it enables cyberattacks to scale and adapt at machine speed, outpacing traditional security tools designed to detect anomalies in human behavior patterns.