Attackers Aren’t Getting More Creative—They’re Getting Faster
In November 2025, Anthropic disclosed that it had detected and disrupted a cyber-espionage operation, attributed to a Chinese state-sponsored group, that began in mid-September. The operation used AI agents—Claude Code instances running as autonomous orchestrators—to execute roughly 80–90% of the tactical work in an intrusion campaign targeting approximately 30 entities. Human operators stepped in at only a few critical decision points: approving escalation from reconnaissance to exploitation and deciding what to exfiltrate.
Key Takeaways
- Vulnerability exploitation is now the top attack vector—and AI is compressing the timeline. IBM X-Force found that attacks on public-facing applications surged 44%, with many exploited vulnerabilities requiring no authentication at all.
- AI platforms are now a credential harvesting ecosystem. More than 300,000 ChatGPT credentials appeared on the dark web in 2025, giving attackers pathways to manipulate outputs, exfiltrate data, and inject malicious prompts.
- Supply chain compromises have nearly quadrupled since 2020, and AI toolchains are next. Attackers exploit CI/CD pipelines, SaaS integrations, and trusted developer identities—while 72% of organizations can’t even produce a reliable software component inventory.
- Ransomware groups surged 49% while 82% of detections are malware-free. Attackers operate through valid credentials and native tools, making data-layer controls the last line of defense that works.
- Most organizations lack the basic controls to govern AI data access. 63% can’t enforce purpose limitations on AI agents, 60% can’t terminate a misbehaving agent, and only 43% have a centralized AI data gateway.
That incident was a preview of the operational reality the IBM X-Force Threat Intelligence Index 2026 now documents at scale. Exploitation of public-facing applications jumped 44% year-over-year. Vulnerability exploitation displaced phishing and stolen credentials as the leading initial access vector, accounting for 40% of incidents X-Force observed in 2025. Many of those exploited vulnerabilities required no authentication whatsoever.
The IBM report’s core finding isn’t that attacks are more sophisticated. It’s that they’re faster. AI tools help attackers scan, identify, and exploit weaknesses in the time it takes most security teams to triage an alert. The average eCrime breakout time—initial access to lateral movement—is now 29 minutes, according to the CrowdStrike 2026 Global Threat Report. The fastest observed breakout: 27 seconds. At that speed, reactive monitoring isn’t a strategy. It’s a post-incident report.
300,000 AI Credentials on the Dark Web—and Most Security Teams Don’t Know It
The IBM X-Force data on AI credential theft should unsettle every CISO with AI tools in production. Infostealer malware operators expanded their target lists to include AI chatbot platforms in 2025, leading to more than 300,000 ChatGPT credential sets advertised on the dark web. Password reuse across personal and enterprise accounts turns low-value consumer credentials into high-value enterprise access paths.
This isn’t a hypothetical risk. Compromised AI chatbot accounts create attack vectors that go beyond simple account takeover. Attackers with access to an enterprise employee’s AI credentials can exfiltrate conversation histories containing sensitive data, manipulate AI outputs to influence business decisions, or inject malicious prompts that poison downstream workflows.
The 2026 DTEX/Ponemon Insider Threat Report found that shadow AI is now the top driver of negligent insider incidents, with an average annual insider threat cost of $19.5 million. And 92% of organizations say generative AI has changed how employees share information—yet only 13% have integrated AI into their security strategy. That 79-point gap between awareness and action is where the damage concentrates.
The Kiteworks 2026 Data Security and Compliance Risk Forecast quantifies the control deficit: 30% of organizations cite third-party AI vendor handling as their top security concern, but only 36% have visibility into how those vendors handle data in AI systems. Training data poisoning ranks as the second-highest AI security concern, yet only 22% have pre-training validation in place. Organizations are worried about risks they can’t see—and building AI systems on data pipelines they can’t audit.
Supply Chain Attacks Nearly Quadrupled—and AI Makes the Next Wave Worse
IBM X-Force tracked a nearly fourfold increase in major supply chain and third-party compromises since 2020. Attackers are targeting the environments where software is built and deployed: CI/CD pipelines, open-source package registries, SaaS integrations, and trusted developer identities. One compromised npm maintainer can propagate credential theft to millions of downstream users. One breached SaaS vendor can pivot into customer IAM environments.
The Kiteworks 2026 Forecast found that 72% of organizations cannot produce a reliable inventory of their software components, and 71% lack continuous dependency monitoring. Those numbers are alarming enough for traditional software supply chains. For AI supply chains—where models, training data, embeddings, and inference results flow between organizations—the situation is worse. There is no standard for AI model attestations. Almost no one is tracking model provenance.
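Tracking provenance doesn’t require waiting for a standard to emerge. As a minimal sketch, with illustrative file names and record fields rather than any attestation format, an organization can at least bind a model artifact to the training-data manifest it was built from:

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large model artifacts don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(model_path: Path, manifest_path: Path) -> dict:
    """Bind a model artifact to the training-data manifest it was built from.
    Illustrative only: there is no accepted attestation standard to target."""
    return {
        "model_file": model_path.name,
        "model_sha256": sha256_of(model_path),
        "training_manifest_sha256": sha256_of(manifest_path),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Usage (paths are hypothetical):
# record = provenance_record(Path("model.safetensors"), Path("training_manifest.json"))
```

Even a record this thin gives a downstream consumer something to verify. Today, most AI supply chains offer nothing.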
The 2026 Black Kite Third-Party Breach Report documented 136 verified third-party breach events affecting 719 named victims and an estimated 26,000 unnamed companies, with a 73-day median disclosure lag between breach occurrence and public disclosure. When attackers are inside your partner’s systems for more than two months before anyone says a word, contractual assurances and vendor questionnaires aren’t a security strategy. They’re a liability artifact.
Organizations running AI workloads and sensitive data exchanges through legacy file sharing and managed file transfer infrastructure—built on decades-old protocols without granular access controls, real-time DLP, or AI-aware policy enforcement—are extending these supply chain risks rather than containing them.
109 Ransomware Groups, Zero Malware—Why the Data Layer Is the Last Line
The IBM X-Force report documents an increasingly fragmented ransomware ecosystem. The number of active extortion groups surged from 73 in 2024 to 109 in 2025—a 49% increase. The share attributed to the top 10 groups dropped by 25%, signaling that smaller, more opportunistic operators are entering the market with lower barriers to entry. Leaked toolkits, shared tactics on underground forums, and AI-powered automation make it easier than ever to launch an attack.
CrowdStrike’s parallel finding sharpens the picture: 82% of all detections in 2025 were malware-free. Attackers rely on identity abuse, legitimate tools, and native system utilities to move through enterprise environments without triggering endpoint detection. They steal credentials. They escalate privileges. They search cloud and SaaS platforms for regulated data and intellectual property.
When the attacker never drops malware, endpoint security alone can’t stop them. When they operate through valid credentials, perimeter controls are irrelevant. The defense that still works is the one attackers have to go through regardless of their technique: the data layer. If every access to sensitive data—whether by a human user, an automated pipeline, or an AI agent—must be authenticated, authorized against policy, and logged, the blast radius of a compromised identity shrinks to the permissions the policy engine grants. Not the permissions the attacker escalates to.
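The pattern is small enough to sketch in code. The following is a minimal illustration of an authenticate-authorize-log gate, using hypothetical principal and resource types rather than any specific product’s API:

```python
from dataclasses import dataclass
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data-access")

@dataclass(frozen=True)
class Principal:
    """Human user, pipeline, or AI agent: all go through the same gate."""
    id: str
    kind: str                  # "human" | "pipeline" | "agent"
    granted_scopes: frozenset  # permissions the policy engine issued

def read_resource(resource: str) -> bytes:
    """Stand-in for the real data store."""
    return b"..."

def access_data(principal: Principal, resource: str, required_scope: str) -> bytes:
    """Authenticate-authorize-log on every access. A compromised identity is
    confined to the scopes it was granted, not whatever it escalates to."""
    if required_scope not in principal.granted_scopes:
        log.warning("DENY %s (%s) -> %s", principal.id, principal.kind, resource)
        raise PermissionError(f"{principal.id} lacks scope {required_scope!r}")
    log.info("ALLOW %s (%s) -> %s", principal.id, principal.kind, resource)
    return read_resource(resource)

# Usage: an agent holding a read-only contracts scope cannot pull HR records.
agent = Principal("agent-7f3a", "agent", frozenset({"contracts:read"}))
access_data(agent, "contracts/msa-2026.pdf", "contracts:read")   # allowed
# access_data(agent, "hr/salaries.csv", "hr:read")               # raises
```

Whether the caller is an employee, a cron job, or an agent, the gate is the same, and every decision leaves a log line.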
The AI Containment Gap: 63% Can’t Enforce Purpose Limitations on Agents
The IBM X-Force findings on AI-accelerated attacks collide with a governance gap the Kiteworks 2026 Forecast exposes in detail. Organizations are deploying AI agents that access enterprise data at scale—while lacking the most basic containment controls.
The numbers are stark. 63% of organizations cannot enforce purpose limitations on AI agents—meaning an agent authorized to summarize a contract could query an entire database of financial records. 60% cannot quickly terminate a misbehaving agent. 55% cannot isolate AI systems from broader network access. 54% of boards aren’t engaged on AI governance. Only 43% have a centralized AI data gateway.
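Purpose binding, the control 63% of organizations lack, is conceptually simple: an agent’s credentials carry the one purpose they were issued for, and every data request is checked against it. A minimal sketch, with illustrative purpose labels and resource prefixes:

```python
# Purpose binding: an agent token carries the single purpose it was issued
# for, and every data request is checked against it. The labels and resource
# prefixes below are illustrative.
ALLOWED_PREFIXES_BY_PURPOSE = {
    "summarize-contract": ("contracts/",),
    "reconcile-invoices": ("invoices/", "payments/"),
}

class PurposeViolation(Exception):
    pass

def check_purpose(agent_purpose: str, resource_path: str) -> None:
    """Deny any access outside the resource prefixes bound to the purpose,
    so a contract-summarizer agent cannot touch financial records at all."""
    prefixes = ALLOWED_PREFIXES_BY_PURPOSE.get(agent_purpose, ())
    if not resource_path.startswith(prefixes):
        raise PurposeViolation(
            f"purpose {agent_purpose!r} does not cover {resource_path!r}")

check_purpose("summarize-contract", "contracts/msa-2026.pdf")  # allowed
# check_purpose("summarize-contract", "finance/ledger.db")     # raises
```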
The February 2026 “Agents of Chaos” study—conducted by 20 researchers from MIT, Harvard, Stanford, CMU, and other institutions—documented these containment failures in live environments. AI agents converted short-lived requests into permanent background processes with no termination condition. They took irreversible actions without recognizing they were exceeding competence boundaries. They had no reliable mechanism for distinguishing between authorized users and attackers. Prompt injection, the study confirmed, is a structural feature of how language models process instructions—not a fixable bug.
The World Economic Forum Global Cybersecurity Outlook 2026 reinforced this trajectory, noting that AI agents can accumulate excessive privileges, be manipulated through design flaws or prompt injections, and propagate errors at scale. Without data-layer governance, the containment controls that matter most—purpose binding, kill switches, network isolation—depend entirely on the AI runtime. And as IBM X-Force demonstrates, runtimes get compromised.
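This is why a kill switch belongs at the data layer rather than inside the agent runtime. In the sketch below (the agent IDs and gateway interface are illustrative), revocation lives at the gateway, so a runaway or compromised agent loses data access even if it ignores every shutdown signal:

```python
import threading

# Data-layer kill switch: revocation is enforced at the gateway, not inside
# the agent runtime, so termination does not depend on the agent cooperating.
_revoked: set[str] = set()
_lock = threading.Lock()

def revoke(agent_id: str) -> None:
    """Flip the kill switch: all future data requests from this agent fail."""
    with _lock:
        _revoked.add(agent_id)

def gateway_check(agent_id: str) -> None:
    """Called on every request, before any policy evaluation."""
    with _lock:
        if agent_id in _revoked:
            raise PermissionError(f"agent {agent_id} has been terminated")

revoke("agent-7f3a")
# gateway_check("agent-7f3a")  # raises PermissionError on the next request
```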
How Kiteworks Governs AI Data Access at the Architecture Level
The IBM X-Force data builds an argument that most organizations already feel in their security operations: AI adoption is accelerating, attack surfaces are expanding, and the controls in place were designed for a pre-AI threat landscape. Closing that gap requires governing the data layer—not the model, not the runtime, not the vendor contract.
Kiteworks delivers this through two purpose-built capabilities. The Kiteworks Secure MCP Server enables AI assistants like Claude and Copilot to securely interact with enterprise data through the industry-standard Model Context Protocol. Every AI operation is authenticated via OAuth 2.0, authorized against RBAC and ABAC policies in real time, and logged in a tamper-evident audit trail. Credentials are stored in the OS keychain—never exposed to the AI model. Rate limiting prevents bulk data extraction even if an AI system is compromised.
The Kiteworks AI Data Gateway provides a secure bridge for programmatic AI workflows, including production RAG pipelines. The gateway delivers zero-trust AI data access: every request verified, every file access policy-evaluated, every interaction tracked and fed to SIEM in real time. AI systems access only the data their policy authorizations permit—nothing more.
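Stripped to its essentials, the zero-trust retrieval pattern looks like the sketch below. The retriever, policy, and audit interfaces are illustrative stand-ins, not the Kiteworks API; the point is that every candidate document is policy-evaluated for the requesting user before it ever reaches the model:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    id: str
    text: str

def authorized_context(user_id: str, query: str, retriever, policy, audit) -> list[str]:
    """Zero-trust retrieval: every candidate document is policy-evaluated for
    this specific user, and every decision is logged for the SIEM feed.
    retriever, policy, and audit are illustrative interfaces, not a vendor API."""
    context: list[str] = []
    for doc in retriever.search(query, top_k=20):
        allowed = policy.permits(user_id, doc.id, action="read")
        audit.log(user=user_id, doc=doc.id, allowed=allowed)
        if allowed:
            context.append(doc.text)
    return context  # only policy-authorized text is assembled into the prompt
```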
Both capabilities feed into the same unified audit trail, giving compliance officers the evidence package that IBM, CrowdStrike, and every regulatory framework now demand: what data was accessed, by which AI system, for which user, at what time, under what policy. When the auditor asks how you control AI access to sensitive data, the answer is a report—not an investigation.
What Organizations Need to Do Before the Next X-Force Report
First, map where AI systems access your sensitive data today—not where you think they access it. The IBM X-Force data shows that attackers exploit what organizations don’t monitor. The Thales report confirms only 33% have complete knowledge of where data is stored. If you don’t know where your data lives, you can’t govern what touches it.
Second, treat AI agent credentials as privileged infrastructure. The 300,000 compromised ChatGPT credentials IBM documented aren’t just an authentication problem—they’re a data exfiltration vector. Apply the same identity governance to AI tools that you apply to admin accounts: MFA, credential rotation, session monitoring, and real-time anomaly detection.
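Session monitoring can start small. The sketch below flags an AI session whose data-access rate exceeds a baseline, the signature of a stolen credential being used for bulk extraction; the window and threshold are illustrative placeholders, not recommended values:

```python
import time
from collections import deque

# Session-monitoring sketch: flag an AI tool session whose data-access rate
# exceeds a baseline. The 100-requests-per-5-minutes threshold is an
# illustrative placeholder.
WINDOW_SECONDS = 300
MAX_REQUESTS_PER_WINDOW = 100

_request_times: dict[str, deque] = {}

def within_baseline(session_id: str, now: float | None = None) -> bool:
    """Record one request; return False if the session looks like bulk
    extraction through a compromised credential."""
    now = time.time() if now is None else now
    times = _request_times.setdefault(session_id, deque())
    times.append(now)
    while times and times[0] < now - WINDOW_SECONDS:
        times.popleft()
    return len(times) <= MAX_REQUESTS_PER_WINDOW
```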
Third, close the AI containment gap before regulators close it for you. The Kiteworks 2026 Forecast found that 63% of organizations lack purpose binding for AI agents and 60% lack kill switches. The EU AI Act’s high-risk provisions become fully enforceable in August 2026. Build purpose limitations, termination controls, and network isolation into your AI data access architecture now.
Fourth, extend your data governance policies to every AI interaction—not just human users. HIPAA, PCI DSS, CMMC, SOX, and GDPR don’t exempt AI agents. An AI system accessing protected health information, cardholder data, or controlled unclassified information triggers the same compliance obligations as a human employee. The governance architecture must enforce that equivalence automatically.
Fifth, demand audit-quality evidence from your AI data pipelines. The IBM X-Force supply chain data and the Black Kite disclosure lag data make clear that vendor assurances aren’t evidence. Organizations need tamper-evident logs that show exactly what AI accessed, when, under what authorization, and with what outcome. Stated compliance is no longer sufficient—provable control is the new standard.
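In practice, tamper-evident usually means hash-chained: each log entry incorporates the hash of the previous one, so editing or deleting any record breaks the chain. A minimal sketch, with illustrative field names; a real deployment would also anchor the chain head in external WORM storage:

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(log: list[dict], event: dict) -> None:
    """Fold the previous entry's hash into this one, chaining the log."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    log.append({"prev": prev_hash, "event": event,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edit or deletion breaks verification."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

audit: list[dict] = []
append_entry(audit, {"ai": "assistant-1", "user": "u42", "doc": "phi/rec-9", "allowed": True})
assert verify_chain(audit)
```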
The IBM X-Force 2026 report documents a threat landscape where speed, identity abuse, and supply chain exploitation define attacker operations. AI amplifies every one of those trends. The organizations that build data-layer governance now—zero-trust AI data access, real-time policy enforcement, unified audit trails—will own the competitive and compliance advantage. The rest will learn the hard way that the data layer is the last line, and they left it unguarded.
Frequently Asked Questions
How does AI credential theft threaten enterprise data security?
AI credential theft directly threatens enterprise data security. IBM X-Force found that infostealer malware exposed over 300,000 AI chatbot credentials in 2025, creating pathways to sensitive enterprise data. Organizations deploying AI assistants need zero-trust data access controls that authenticate and authorize every AI request independently, preventing compromised credentials from becoming data exfiltration vectors. The IBM X-Force 2026 report recommends enforcing strong AI authentication and monitoring for abnormal access patterns.
What does the IBM X-Force supply chain data mean for manufacturing organizations?
The IBM X-Force supply chain data means manufacturing organizations must govern AI data exchanges with partners at the infrastructure level, not the contract level. Supply chain breaches have nearly quadrupled since 2020, and the Black Kite 2026 Third-Party Breach Report documents a 73-day median disclosure lag. AI models, training data, and inference results flowing through ungoverned channels create the same trusted-path vulnerabilities attackers already exploit.
Why is shadow AI now a top insider threat?
Shadow AI is now the top driver of negligent insider incidents, according to the DTEX/Ponemon 2026 Insider Threat Report, costing organizations an average of $19.5 million annually. CrowdStrike data shows 82% of detections are malware-free, meaning shadow AI tools operate outside traditional detection. Organizations need centralized AI data gateways that enforce policy on every AI data request—before regulators and attackers discover the blind spots.
Do HIPAA obligations apply to AI agents accessing protected health information?
AI agents accessing protected health information trigger the same HIPAA obligations as human users—including minimum necessary access, audit trails, and breach notification. The Kiteworks 2026 Forecast found that 63% of organizations cannot enforce purpose limitations on AI agents. Healthcare organizations need data-layer governance that authenticates every AI request, enforces ABAC policies restricting access to authorized data only, and produces tamper-evident audit logs for compliance evidence.
Which single metric best captures the AI governance gap?
The clearest metric is the containment control deficit: 63% of organizations cannot enforce purpose limitations on AI agents, 60% cannot terminate a misbehaving agent, and only 43% have a centralized AI data gateway, per the Kiteworks 2026 Forecast. Pair that with IBM X-Force’s finding that vulnerability exploitation is now the leading attack vector at 40% of incidents, and the message is clear: AI expands the attack surface, and most organizations lack the data-layer controls to contain it.