Attackers Operate at Machine Speed — Defenders Still Run on Human Timelines
The Booz Allen Hamilton report released on March 16, 2026, frames a problem that most security leaders sense but few have quantified: The time gap between AI-speed attacks and human-speed defense is not narrowing. It is widening.
Key Takeaways
- Threat actors have adopted AI for offensive operations faster than governments and enterprises have adopted it for defense, creating what Booz Allen calls the "cybersecurity speed gap." Attacks that once unfolded over days now cause operational impact in minutes.
- A single operator using agentic AI tooling can now run reconnaissance, exploitation, and follow-on actions across dozens of targets simultaneously. Capabilities that once required large, coordinated specialist teams now require one person with an API key.
- AI platforms themselves are becoming high-value attack surfaces — concentrating sensitive data, identity systems, and workflow authority in a single place. When these platforms are misconfigured or compromised, attackers gain direct reach into the systems organizations depend on to operate.
- Manual cybersecurity operations cannot keep pace with AI-speed attacks, yet most organizations still run incident response on human timelines. CISA gives defenders 15 days to patch critical vulnerabilities; HexStrike exploited 8,000+ endpoints in under 10 minutes.
- Closing the speed gap requires three decisions: move cyber defense to AI speed, secure AI platforms as critical infrastructure, and adopt a human-AI teaming model. Organizations that do not make these shifts will detect intrusions only after attackers have already established control.
The report, covered by CyberScoop, documents a timeline of AI-enabled offensive operations from 2023 through early 2026 that shows a clear inflection point. In August 2025, the open-source HexStrike framework weaponized a Citrix NetScaler vulnerability and exploited 8,000+ endpoints in under 10 minutes. In September 2025, a Chinese state-sponsored campaign, documented by Anthropic that November, used jailbroken Claude Code to autonomously execute a complete cyber kill chain against 30 global targets — with AI handling 80–90% of the tactical work. In January 2026, a security researcher demonstrated that commercial language models could generate complete, working exploit chains for zero-day vulnerabilities at an average cost of $50 per run.
The CrowdStrike 2026 Global Threat Report corroborates the timeline: Average eCrime breakout time dropped to 29 minutes in 2025, down 65% year-over-year, with the fastest observed breakout at 27 seconds. AI-enabled adversary attacks increased 89% year-over-year. Meanwhile, CISA still operates on a 15-day remediation timeline for critical vulnerabilities — and the Booz Allen report found that 60% of those critical vulnerabilities remain unmitigated even after that window closes. That is not a gap. It is a chasm.
Brad Medairy, Booz Allen’s EVP for National Cyber Business, framed the risk in operational terms: Once an attacker exploits a perimeter vulnerability and gets inside the wall, they move at machine speed. Defenders still operating at human speed are not just slower. They are watching the intrusion happen.
Two Models of AI-Enabled Attack — and Why the Second Changes Everything
The Booz Allen report identifies two distinct patterns in how malicious actors use AI. The first is the collaborator model: An operator works interactively with a language model to write scripts, debug code, and adapt tools when something fails. This is an efficiency multiplier — it accelerates what attackers already do while keeping the human in the loop on key decisions.
The second is the orchestration model, and it is fundamentally different. An operator connects an AI system to offensive security tools, points it at a target, sets parameters, and walks away. The system chooses its own tools, runs actions, reads results, and iterates until it reaches the objective. Medairy compared it to vibe coding — define the goal, set the constraints, and let the agent work the problem.
The orchestration model is what makes the HexStrike and Claude Code incidents so consequential. These are not sophisticated nation-state teams using AI to shave hours off a manual process. They are automated systems running complete offensive workflows with minimal human direction. The Claude Code campaign documented by Anthropic in November 2025 is particularly instructive: jailbroken AI agents independently selected targets, generated exploits, executed intrusions, exfiltrated data, and installed persistence — all without real-time human direction. Human operators stepped in at only four to six decision points across the entire campaign. The Kiteworks Forecast noted a defensive insight from that same incident: The AI sometimes overstated findings or fabricated data, forcing attackers to validate results, which slowed the campaign. That unreliability is the thinnest of silver linings — and it will not last as models improve.
The Agents of Chaos study published in February 2026 — conducted by 20 researchers from MIT, Harvard, Stanford, and CMU — documented the structural deficits that make this possible: AI agents have no reliable mechanism for distinguishing authorized users from attackers, no internal model of their own competence boundaries, and no way to prevent cross-agent propagation of compromised instructions.
The Kiteworks 2026 Data Security and Compliance Risk Forecast Report puts a number on the defender side of this equation: 63% of organizations cannot enforce purpose limitations on AI agents, 60% cannot terminate a misbehaving agent, and 55% cannot isolate AI systems from broader network access. Attackers are building autonomous offensive agents. Most defenders cannot even constrain their own.
AI Platforms Are the New Critical Infrastructure — and the Newest Attack Surface
The Booz Allen report makes an argument that extends beyond traditional threat analysis: AI platforms themselves have become critical infrastructure. These systems concentrate sensitive data, connect to email and ticketing systems, integrate with code repositories, and trigger actions through plugins, agents, and automated workflows. When they are compromised, attackers gain direct access to the highest-trust parts of the enterprise.
The documented cases are specific. XLab reported Pickai malware spreading through vulnerabilities in ComfyUI, an AI workflow tool, affecting nearly 700 servers. Microsoft Incident Response documented attackers using the OpenAI Assistants API to pass instructions and receive results as a command-and-control channel. Public repositories have been used to distribute malicious AI packages with polished, AI-written documentation to appear legitimate — Sonatype reported nearly 400,000 new open-source malware packages in Q4 2025, with 89% attributed to scripted and AI-assisted publishing by a single campaign.
The risk compounds when no single team owns AI security end to end. One team runs models and workflows. Another manages access and logs. A third manages vendors. The Kiteworks Forecast found that 57% of organizations lack a centralized AI data gateway and 33% of government organizations have no dedicated AI controls at all. The Black Kite 2026 Third-Party Breach Report documented 136 verified third-party breach events in 2025, affecting 719 named victims and roughly 26,000 unnamed companies — with a 73-day median disclosure lag. When AI platforms connect to internal systems, partner APIs, and supply chain workflows, every one of those connections becomes a potential propagation path for a compromised AI agent.
The Booz Allen report’s prescription — treat AI platforms with enforceable security baselines for access, logging, integrations, and data handling — is a direct acknowledgment that voluntary guidance does not match the risk these platforms introduce.
The Speed Gap Is Also a Data Governance Gap
Beneath the speed problem sits a data problem. AI-enabled attackers are not just faster at breaking in — they are faster at finding and extracting what matters. The shift from malware-based intrusions to identity-based, credential-driven operations means attackers operate through legitimate accounts, access legitimate systems, and exfiltrate data through legitimate channels. The CrowdStrike report documented that 82% of detections in 2025 were malware-free.
This changes what “defense” means at the data layer. Traditional perimeter security, endpoint detection, and signature-based tools are designed to spot malicious files. They are not designed to spot malicious behavior through trusted accounts accessing sensitive data at machine speed.
The 2026 Thales Data Threat Report found that only 33% of organizations have complete knowledge of where their data resides. The Kiteworks Forecast found that 33% lack evidence-quality audit trails entirely and 61% have fragmented logs across systems — logs that cannot produce actionable forensic evidence during an AI-speed incident. The DTEX 2026 Insider Threat Report adds another dimension: Shadow AI is now the top driver of negligent insider incidents, yet only 13% of organizations have integrated AI into their security strategy.
When attackers move through legitimate accounts at machine speed and defenders cannot even audit what their own AI systems access, the speed gap becomes a visibility gap — and the visibility gap becomes a compliance gap.
The WEF Global Cybersecurity Outlook 2026 reinforces this convergence: 73% of respondents reported that they or someone in their network had been personally affected by cyber-enabled fraud in 2025, and CEOs now rank cyber-enabled fraud and AI vulnerabilities as their top two concerns — displacing ransomware for the first time. The Booz Allen report connects the dots: When AI scales deception to industrial levels and attackers operate through trusted identities, security becomes inseparable from data governance. Organizations that cannot prove what data was accessed, by whom, under what policy, and whether the accessor was human or machine will fail both the incident investigation and the regulatory audit that follows.
How Kiteworks Closes the Gap Between AI-Speed Attacks and Data-Layer Defense
The Booz Allen report’s three recommendations — move defense to AI speed, secure AI platforms as critical infrastructure, and adopt human-AI teaming — all converge on a single architectural requirement: The data layer must be governed independently of the model, the agent, and the user.
Kiteworks operates as the control plane for secure data exchange, providing unified governance across every channel where sensitive data moves — email, file sharing, SFTP, managed file transfer, APIs, data forms, and AI integrations via its Secure MCP Server. This is not another monitoring layer. It is the enforcement layer.
For AI-speed containment, Kiteworks captures a tamper-evident audit trail of every interaction with sensitive data — human or AI agent — in real time, feeding directly into SIEM infrastructure with zero throttling and zero delay. When an incident unfolds in minutes, investigators have the evidence chain already assembled, not scattered across five systems with 72-hour log delays.
For AI platform security, Kiteworks enforces attribute-based access control (ABAC) at the data layer, ensuring that every AI agent request is authenticated, authorized against a multi-dimensional policy, encrypted with FIPS 140-3 validated encryption, and logged with full delegation chain attribution. Purpose binding limits what agents are authorized to do. Kill-switch capability enables rapid termination. Single-tenant isolation prevents cross-tenant vulnerability exploitation.
For human-AI teaming, Kiteworks replaces the manual compliance review gates that block AI deployment with continuous, automated governance. AI projects deploy at speed because compliance is built into the architecture — not bolted on as a periodic approval checkpoint.
Five Shifts Security Leaders Must Make Before the Speed Gap Closes on Them
First, preapprove automated containment actions for AI-speed incidents. The Booz Allen report is explicit: Waiting for manual approval during an intrusion is too slow. Define in advance which actions — host isolation, traffic blocking, session revocation, privilege freezing — can execute automatically within defined thresholds. Test those decisions through tabletop exercises before the incident forces them.
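The idea of preapproved containment can be sketched as a small policy table: each action carries an automation threshold agreed in advance, and anything below it falls back to human approval. This is an illustrative sketch only, not any vendor's API; the action names, confidence scale, and threshold values are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ISOLATE_HOST = "isolate_host"
    BLOCK_TRAFFIC = "block_traffic"
    REVOKE_SESSION = "revoke_session"
    FREEZE_PRIVILEGES = "freeze_privileges"

@dataclass
class Playbook:
    # Minimum detection confidence (0-1) at which each action may
    # execute without waiting for a human approver.
    auto_thresholds: dict

    def decide(self, action: Action, confidence: float) -> str:
        threshold = self.auto_thresholds.get(action)
        if threshold is None:
            return "manual"   # action was never preapproved
        if confidence >= threshold:
            return "auto"     # execute immediately, log for review
        return "manual"       # below threshold: queue for human approval

# Example policy settled in tabletop exercises (values are illustrative).
playbook = Playbook(auto_thresholds={
    Action.REVOKE_SESSION: 0.6,   # cheap to reverse, low threshold
    Action.ISOLATE_HOST: 0.8,     # disruptive, needs higher confidence
    Action.BLOCK_TRAFFIC: 0.7,
})

print(playbook.decide(Action.REVOKE_SESSION, 0.75))     # auto
print(playbook.decide(Action.FREEZE_PRIVILEGES, 0.99))  # manual: never preapproved
```

The point of the sketch is that the decision logic is decided and tested before the incident, so no approval chain runs while the attacker is moving.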
Second, establish enforceable security baselines for every AI platform in production. The Kiteworks Forecast found that 57% of organizations lack a centralized AI data gateway. Every automated workflow should have its own identity and access controls, zero-trust policies limiting what data and systems it can reach, and detailed logging of every tool call, key event, and performance signal.
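A minimal version of that baseline, each workflow with its own identity, a deny-by-default tool allow-list, and a log entry for every tool call, might look like the following sketch. The class and tool names are hypothetical, not a real platform's API.

```python
import time
import uuid

class WorkflowIdentity:
    """Each automated workflow gets its own identity, allow-list, and log."""
    def __init__(self, name: str, allowed_tools: set):
        self.name = name
        self.workflow_id = str(uuid.uuid4())
        self.allowed_tools = allowed_tools   # zero-trust: deny by default
        self.log = []

    def call_tool(self, tool: str, args: dict) -> dict:
        record = {
            "ts": time.time(),
            "workflow": self.name,
            "workflow_id": self.workflow_id,
            "tool": tool,
            "args": args,
        }
        if tool not in self.allowed_tools:
            record["outcome"] = "denied"
            self.log.append(record)          # denied calls are logged too
            raise PermissionError(f"{self.name} may not call {tool}")
        record["outcome"] = "allowed"
        self.log.append(record)
        # ... dispatch to the real tool here ...
        return record

triage = WorkflowIdentity("alert-triage", allowed_tools={"search_logs", "enrich_ioc"})
triage.call_tool("search_logs", {"query": "failed logins"})
try:
    triage.call_tool("delete_host", {"host": "db-01"})   # outside the allow-list
except PermissionError as e:
    print(e)
```

Scoping the identity to the workflow, rather than to a shared service account, is what makes the log attributable when something misbehaves.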
Third, unify your audit trail infrastructure across all data exchange channels. The Kiteworks Forecast found 61% of organizations have fragmented logs that are not actionable. When breakout time is 29 minutes and your logs arrive 72 hours later, you are investigating a crime scene, not containing an attack.
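Unification at its simplest means merging per-system logs into one time-ordered stream an investigator can read end to end. A toy sketch, with invented event records:

```python
import heapq

def unified_timeline(*log_streams):
    """Merge per-system logs (each already sorted by timestamp) into one
    ordered forensic timeline."""
    return list(heapq.merge(*log_streams, key=lambda e: e["ts"]))

# Hypothetical per-channel logs with relative timestamps.
email_log = [{"ts": 10, "src": "email", "event": "attachment sent"}]
sftp_log  = [{"ts": 5,  "src": "sftp",  "event": "bulk download"},
             {"ts": 12, "src": "sftp",  "event": "session closed"}]

for event in unified_timeline(email_log, sftp_log):
    print(event["ts"], event["src"], event["event"])
```

Real log pipelines must also normalize clocks and schemas before merging; the sketch assumes both are already consistent.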
Fourth, deploy data-layer governance for all AI integrations — not model-layer guardrails. System prompts are not compliance controls. They are instructions that can be bypassed by prompt injection, model updates, or indirect manipulation. Only data-layer enforcement — independent of the model — constitutes an audit-defensible control.
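The distinction can be made concrete: a system prompt asks the model to behave, while a data-layer gate checks every access request after the model has produced it, so a prompt injection that tricks the model into requesting restricted data still hits the gate. A minimal sketch; the classification labels and purposes are assumptions.

```python
# Data-layer enforcement: policy is evaluated here, independent of
# whatever instructions the model was given or tricked into following.
POLICY = {
    # classification label -> purposes allowed to read it (illustrative)
    "public":       {"triage", "reporting"},
    "confidential": {"triage"},
    "restricted":   set(),   # no automated purpose may read this
}

def gate(request: dict) -> bool:
    """Allow the request only if policy permits this purpose to touch
    this classification. The model's prompt is irrelevant here."""
    allowed = POLICY.get(request["classification"], set())
    return request["purpose"] in allowed

# An agent asks for restricted data, perhaps after prompt injection:
print(gate({"classification": "restricted", "purpose": "triage"}))   # False
print(gate({"classification": "public", "purpose": "reporting"}))    # True
```

Because the gate never consults the model, updating or jailbreaking the model cannot change what the gate permits, which is what makes the control audit-defensible.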
Fifth, adopt a human-AI teaming model for security operations. The Booz Allen report estimates that this approach can expand a security team’s capacity by 10 to 100 times. Automated agents handle routine triage, detection updates, and first containment. Human analysts supervise, refine detection logic, and intervene where judgment or broader context is required.
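One way to read the teaming model is as a routing rule: routine, known alert patterns go to automated handling, and anything novel or high-impact escalates to an analyst. A toy sketch under assumed inputs; the severity scale and fields are illustrative.

```python
def route_alert(alert: dict) -> str:
    """Route an alert to automated triage or a human analyst.
    'severity' (0-10) and 'seen_before' are hypothetical fields."""
    if alert["severity"] >= 8:    # high impact: always a human decision
        return "analyst"
    if not alert["seen_before"]:  # novel pattern: needs human judgment
        return "analyst"
    return "auto_triage"          # routine, known pattern: agent handles it

print(route_alert({"severity": 3, "seen_before": True}))    # auto_triage
print(route_alert({"severity": 3, "seen_before": False}))   # analyst
```

The capacity multiplier comes from the first branch being rare: most alerts are routine, so most of the queue never waits on a human.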
The Booz Allen report’s conclusion is stark: The question is no longer whether organizations will face AI-enabled intrusions. They already do. The question is whether defenders can act in time — or only after the damage is done.
Frequently Asked Questions
Why can't manual incident response keep pace with AI-enabled attacks?
The Booz Allen 2026 report documents that AI-enabled attackers now move from discovery to operational impact in minutes, while manual SOC triage and approval chains run on timelines measured in hours or days. HexStrike exploited 8,000+ endpoints in under 10 minutes. The CrowdStrike 2026 Global Threat Report confirms average breakout time dropped to 29 minutes. Containment must start automatically within preapproved thresholds.
Why are AI platforms now a high-value attack surface?
AI platforms concentrate sensitive data, identity systems, and workflow authority, making them high-value targets. The Booz Allen report documented attackers using the OpenAI Assistants API as a command-and-control channel and malware spreading through AI workflow tool vulnerabilities. The Kiteworks Forecast found 57% lack a centralized AI data gateway and 33% of government organizations have no dedicated AI controls.
What is the AI cybersecurity speed gap?
The AI cybersecurity speed gap measures the time difference between AI-speed attack execution and human-speed defense response. The Booz Allen report shows CISA gives defenders 15 days to patch critical vulnerabilities while HexStrike weaponized a CVE in under 10 minutes. The CrowdStrike report documents 29-minute average breakout time and 82% malware-free detections, meaning traditional tools miss most attacks entirely.
How does Kiteworks close the speed gap for regulated data?
Kiteworks closes the AI speed gap for regulated data by enforcing attribute-based access control at the data layer, capturing tamper-evident audit trails in real time with zero throttling, and applying FIPS 140-3 validated encryption to every interaction. Pre-built compliance dashboards map to HIPAA, CMMC, GDPR, and PCI DSS. The Kiteworks Forecast found 63% lack AI purpose binding — Kiteworks enforces it architecturally.
How does the human-AI teaming model work in security operations?
The human-AI teaming model deploys automated AI agents for routine alert triage, detection rule updates, and first containment actions while human analysts supervise, refine logic, and handle complex investigations. The Booz Allen report estimates this approach expands a security team’s capacity by 10–100x. Kiteworks supports this model with automated governance — continuous audit trails and policy enforcement that remove manual compliance review gates.