Custom AI Applications Are Reaching Production Before Security Catches Up

Gartner’s March 2026 forecast puts a number on what security leaders have been sensing for months: by 2028, half of all enterprise cybersecurity incident response work will center on custom-built AI applications. The prediction, published by SecurityBrief, reflects a market where AI-driven software is being pushed into production across business processes and customer-facing services faster than security teams can evaluate, test, or build response procedures around it.

Key Takeaways

  1. Gartner forecasts that by 2028, half of all enterprise cybersecurity incident response efforts will involve custom-built AI applications. Most security teams lack playbooks, detection tools, and containment procedures for AI-specific incidents.
  2. Manual AI compliance processes will expose 75% of regulated organizations to fines exceeding 5% of global revenue by the end of 2027. Organizations still relying on spreadsheets and ad hoc evidence collection are building a regulatory liability that scales with every new AI deployment.
  3. One-third of all IT work through 2030 will be spent remediating “AI data debt”: the accumulated weaknesses in the datasets that AI systems now depend on. Unstructured, poorly classified, and inconsistently secured data spread across file shares, SaaS platforms, and legacy systems is the foundation on which organizations are building their AI strategies.
  4. Security teams are building AI systems they cannot investigate when something goes wrong. An AI incident can look like a security event, a software defect, a data quality problem, or all three at once — and 60% of organizations lack the anomaly detection tools to tell the difference.
  5. The governance-versus-containment gap means most organizations can watch an AI agent misbehave but cannot stop it: 63% cannot enforce purpose limitations, 60% cannot terminate a misbehaving agent, and 55% cannot isolate AI from sensitive systems.

Christopher Mixter, a Gartner VP Analyst, framed the problem in architectural terms: these systems are complex, dynamic, and difficult to secure over time. Custom AI applications change after deployment — models retrain, data pipelines shift, integrations evolve. The security assumptions validated at launch may not hold three months later.

The Kiteworks 2026 Data Security and Compliance Risk Forecast Report confirms this pattern at scale. One hundred percent of surveyed organizations have agentic AI on their roadmap. Yet 60% lack AI-powered anomaly detection, 51% are running manual incident response playbooks, and 52% have not tested their recovery time or recovery point objectives. The foundational capabilities — immutable backups (68%), audit trails (67%) — exist. The AI-specific detection and response capabilities that custom AI deployments demand do not.

AI Incidents Do Not Look Like Traditional Security Events

Incident response teams are trained to work through a familiar sequence: detection, containment, eradication, recovery. AI systems break that model. A failure in a custom AI application might present as a security event — unauthorized data access triggered by a model behaving outside its intended scope. Or it might look like a software defect — an integration failure between the model and a downstream service. Or it might be a data quality problem — a training pipeline that ingested data it should not have. In many cases, it is all three.

This ambiguity is what makes AI incident response fundamentally harder. Traditional incidents have a clear blast radius. AI incidents have a diffuse one. An Agents of Chaos study published in February 2026 by 20 researchers across MIT, Harvard, Stanford, and CMU documented exactly this pattern in live deployments: AI agents disclosed sensitive information, complied with unauthorized requests, and executed actions beyond their intended scope — all without triggering conventional security alerts. The failures were not exploits in the traditional sense. They were emergent behaviors that existing detection tools were not designed to catch.

The Kiteworks Forecast found that government organizations are in the worst position: 76% lack AI anomaly detection and 76% are running manual IR playbooks. Healthcare is close behind — 64% missing AI anomaly detection and 77% not testing recovery capabilities. These are the sectors handling the most sensitive regulated data, and they are the least prepared for the incident type Gartner says will dominate by 2028.

Manual AI Compliance Is a Liability With a Deadline

Gartner’s second forecast is equally direct: by the end of 2027, manual AI compliance processes will expose 75% of regulated organizations to fines exceeding 5% of global revenue. The prediction targets organizations that still manage AI risk through spreadsheets, ad hoc evidence collection, and manual approval workflows — processes that worked tolerably when compliance was periodic and AI was experimental.

Those conditions no longer exist. The EU AI Act is phasing in through 2026, with high-risk system obligations becoming fully enforceable by August 2026. The Colorado AI Act takes effect in 2026. California’s CPPA automated decision-making regulations begin enforcement in January 2027. Each new framework expands the scope of what must be documented, monitored, and reported — and each expects continuous evidence, not quarterly snapshots.

The Kiteworks Forecast quantified the operational gap: 25% of all organizations still use manual or periodic compliance as their primary approach. In government, 38% rely on manual compliance processes. Healthcare sits at 32%. These are the organizations most likely to be caught by Gartner’s forecast — not because they ignored compliance, but because their compliance infrastructure cannot keep pace with their AI deployment velocity.

AI Data Debt: The Hidden Infrastructure Problem

Through 2030, Gartner forecasts that 33% of IT work will be spent remediating what it calls “AI data debt” — weaknesses in the underlying datasets that organizations rely on for AI systems. The term covers unstructured, poorly classified, and inconsistently secured data spread across file shares, SaaS platforms, and legacy systems.

This is the foundation problem. AI applications are only as governed as the data they access. When data classification is incomplete, access controls are inconsistent, and retention policies are unenforced, every AI system built on that foundation inherits the same vulnerabilities.

The Kiteworks Forecast documents this at the control level. Sixty-one percent of organizations cannot enforce consistent data tagging across their systems. Seventy-eight percent cannot validate data before it enters AI training pipelines. Fifty-three percent cannot recover training data after an incident. The 2026 Thales Data Threat Report adds another dimension: only 33% of organizations report complete knowledge of where their data resides. When two-thirds of organizations do not know where their data is, AI data debt is not a future risk. It is a current one.
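A pre-ingest validation gate of the kind these numbers imply can be sketched in a few lines: reject any record that lacks a classification tag or carries a sensitivity level the training pipeline is not cleared for. The tag names and allowed levels below are illustrative assumptions, not any vendor's actual schema.

```python
# Hypothetical pre-ingest gate: every record must carry a classification
# tag, and only approved sensitivity levels may enter AI training data.
ALLOWED_LEVELS = {"public", "internal"}  # assumption: "restricted" never trains


def validate_record(record: dict) -> tuple[bool, str]:
    """Return (admitted?, reason) for a single candidate record."""
    tag = record.get("classification")
    if tag is None:
        return False, "untagged"  # the check 78% of organizations skip
    if tag not in ALLOWED_LEVELS:
        return False, f"disallowed level: {tag}"
    return True, "ok"


def gate(records: list[dict]) -> list[dict]:
    """Admit only validated records; rejects would go to an audit trail."""
    admitted = []
    for r in records:
        ok, _reason = validate_record(r)
        if ok:
            admitted.append(r)
    return admitted
```

The point of the sketch is placement: the check runs at the data layer, before any model or framework sees the record, so it holds regardless of which AI system sits downstream.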

Data loss prevention (DLP) programs are expanding to cover AI-driven data flows, but the expansion is hitting architecture limits. Monitoring requests made by generative AI tools and agentic AI systems that retrieve information from multiple sources requires a governance layer that operates at the data layer — not the model layer, not the application layer. Traditional DLP was designed for humans sending files, not for AI agents making API calls across data systems. The Black Kite 2026 Third-Party Breach Report documented 136 verified third-party breach events in 2025 alone — and as AI systems expand the number of automated connections to internal and partner data stores, that attack surface compounds.

Sovereignty, Identity, and the Expanding AI Attack Surface

Gartner’s remaining forecasts round out a picture of converging pressures. By 2027, 30% of organizations will require comprehensive sovereignty of their cloud security controls — driven by geopolitical turbulence and regulatory demands around where data resides, who can access it, and how security is administered across borders. By 2028, 70% of CISOs will deploy identity visibility and intelligence capabilities to shrink the identity and access management attack surface.

Both forecasts connect directly to AI security. The Kiteworks 2026 Data Sovereignty Report found that one in three organizations reported a data sovereignty incident in the past 12 months. Twenty-nine percent of organizations in the Kiteworks Forecast cite cross-border AI data exposure as a risk — but only 36% have visibility into where AI systems actually process data. Storage sovereignty is not enough when AI processing happens in a different jurisdiction.

Identity is equally exposed. AI agents create a new class of machine identity that existing IAM tools were not designed to manage. The CrowdStrike 2026 Global Threat Report documented that 82% of detections are now malware-free — attackers operate through valid credentials and native tools. When AI agents also operate through valid credentials and native tools, the line between legitimate automated behavior and credential-based attack becomes extraordinarily difficult to draw without purpose-built AI identity governance.

Gartner also predicts that more than 50% of enterprises will adopt AI security platforms by 2028 to manage both third-party AI services and custom-built applications. The demand is driven by prompt injection attacks, data misuse, and inconsistent controls when different business units deploy different AI services without centralized oversight. Security leaders should evaluate whether their tooling covers both in-house and external AI use — including visibility into AI activity and policy enforcement across all deployment patterns. The DTEX 2026 Insider Threat Report reinforces the urgency: shadow AI is now the top driver of negligent insider incidents, yet only 13% of organizations have integrated AI into their security strategy.

How Kiteworks Addresses Custom AI Security and Compliance Gaps

The Gartner forecasts describe a market where AI systems are deployed faster than they can be secured, investigated, or governed through manual processes. Kiteworks addresses these gaps architecturally — not through another layer of monitoring, but through data-layer governance that operates independently of which AI model, framework, or agent is deployed.

For AI incident response, Kiteworks captures a tamper-evident audit trail of every AI agent interaction with sensitive data — who authorized the agent, which data was accessed, under what policy, and when. When an incident occurs, investigators do not need to reconstruct what happened from fragmented logs across five systems. The evidence is already compiled, structured, and exportable.
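The tamper-evident property of such an audit trail can be illustrated with a minimal hash-chained log, where each entry's hash covers the previous entry's hash so any retroactive edit breaks the chain. The field names and SHA-256 chaining below are assumptions for the sketch, not Kiteworks' actual implementation.

```python
import hashlib
import json
import time


class AuditLog:
    """Minimal hash-chained log of AI agent interactions with data."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def append(self, agent, authorized_by, data_id, policy):
        entry = {
            "agent": agent, "authorized_by": authorized_by,
            "data_id": data_id, "policy": policy,
            "ts": time.time(), "prev": self._prev,
        }
        # Hash covers the entry body, including the previous entry's hash.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

An investigator who can run `verify()` over the exported trail does not have to trust that logs were not altered after the incident; the chain either checks out or it does not.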

For AI compliance automation, Kiteworks replaces manual review gates with continuous governance. Every AI agent workflow inherits compliance controls automatically — attribute-based access control (ABAC), FIPS 140-3 validated encryption, and purpose binding that limits what agents are authorized to do. Pre-built compliance dashboards map directly to HIPAA, CMMC, GDPR, PCI DSS, and SOX frameworks, converting the periodic audit scramble into continuous evidence generation.
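Purpose binding in the ABAC style amounts to evaluating each agent request against the declared purpose and the data's attributes rather than a static role. The policy table, purpose names, and data classes below are hypothetical, chosen only to make the mechanism concrete.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Request:
    agent_id: str
    purpose: str     # declared purpose, e.g. "claims-summarization"
    data_class: str  # attribute of the data, e.g. "phi", "public"
    action: str      # "read" or "write"


# Each purpose is bound to the (data class, action) pairs it may use.
# An agent acting outside its declared purpose is denied even with
# otherwise valid credentials.
POLICY = {
    "claims-summarization": {("phi", "read"), ("public", "read")},
    "marketing-copy":       {("public", "read")},
}


def authorize(req: Request) -> bool:
    allowed = POLICY.get(req.purpose, set())
    return (req.data_class, req.action) in allowed
```

Because the decision keys on attributes rather than on the agent's identity alone, the same policy table governs every model and framework that routes requests through it.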

For AI data debt, Kiteworks operates as the control plane for secure data exchange — one policy engine, one audit log, one security architecture across email, file sharing, SFTP, managed file transfer, APIs, data forms, and AI integrations via its Secure MCP Server. Data classification and access controls enforce consistently across every channel, closing the gap between “we know where our data is” and “we control how AI accesses it.”

What Security Leaders Should Prioritize Before 2028

First, build AI-specific incident response playbooks now. The Kiteworks Forecast found that 51% of organizations are still running manual IR playbooks and 89% have never practiced incident response with third-party partners. An AI incident that spans model behavior, data handling, and service integration cannot be investigated using a traditional breach playbook.

Second, automate AI compliance evidence collection. Gartner’s forecast that manual compliance will expose 75% of regulated organizations to major fines is a deadline, not a prediction. Deploy platforms that produce continuous, tamper-evident compliance evidence — not quarterly audit binders.

Third, inventory and classify the data your AI systems access. The Kiteworks Forecast found that 61% of organizations cannot enforce consistent data tagging. You cannot govern AI data access when you do not know what the data is or where it lives.

Fourth, deploy data-layer governance for all AI integrations. Model-layer guardrails and system prompts are not compliance controls. The Kiteworks Forecast documented that 63% of organizations cannot enforce purpose limitations on AI agents. Data-layer governance — identity verification, ABAC policy enforcement, and evidence-quality logging — is the only approach that scales across models, frameworks, and deployment patterns.

Fifth, extend sovereignty controls to AI processing, not just AI storage. Gartner forecasts that 30% of organizations will require comprehensive cloud security sovereignty by 2027. The Kiteworks Data Sovereignty Report found that most organizations have not extended sovereignty controls beyond storage — leaving AI processing as an unmonitored cross-border exposure.

Gartner’s forecasts describe a two-year window. The organizations that use it to build AI-aware incident response, automated compliance infrastructure, and data-layer governance will be prepared for a world where half of all cyber incidents involve custom AI. The organizations that do not will discover their gaps during the incident — which is the most expensive way to learn.

Frequently Asked Questions

What makes AI cybersecurity incidents harder to investigate than traditional security events?

AI cybersecurity incidents blend security events, software defects, and data quality problems in ways traditional IR playbooks cannot isolate. A model may access unauthorized data, produce flawed outputs, or behave unpredictably after retraining — none of which trigger conventional alerts. Gartner forecasts that by 2028, half of enterprise IR will involve custom AI. The Kiteworks Forecast found 60% lack AI anomaly detection today.

Why do manual AI compliance processes create fine exposure for regulated financial services firms?

Manual AI compliance processes for regulated financial services firms create fine exposure because regulators now expect continuous, evidence-quality documentation of AI data access, not periodic spreadsheet audits. Gartner forecasts 75% of regulated organizations face fines exceeding 5% of revenue by 2027 from manual approaches. The EU AI Act requires structured risk management for high-risk AI in financial services, with penalties reaching €35 million or 7% of global turnover.

What is AI data debt?

AI data debt refers to accumulated weaknesses in datasets AI systems depend on — unclassified, poorly secured, or inconsistently governed data spread across file shares, SaaS platforms, and legacy systems. The Kiteworks Forecast found 61% cannot enforce consistent data tagging and 78% cannot validate data entering training pipelines. Gartner projects 33% of IT work through 2030 will remediate this debt as AI expands access to internal data stores.

How can organizations enforce governance over agentic AI?

Enforcing agentic AI governance requires data-layer controls that operate independently of the model or framework. The Kiteworks Forecast found 63% lack purpose binding and 60% lack kill switches for AI agents. Kiteworks addresses this with attribute-based access control at the data layer, enforcing purpose limitations, time-bound permissions, and tamper-evident logging for every agent interaction regardless of AI platform.

What does cloud security sovereignty mean for custom AI applications?

Cloud security sovereignty for custom AI applications extends beyond storage location to processing jurisdiction. Gartner predicts 30% of organizations will require comprehensive sovereignty of cloud security controls by 2027. The Kiteworks Data Sovereignty Report found one in three organizations experienced a sovereignty incident in the past year, and only 36% have visibility into where AI systems actually process data. Single-tenant deployment with geographic access restrictions addresses this gap.

Get started.

It’s easy to start ensuring regulatory compliance and effectively managing risk with Kiteworks. Join the thousands of organizations who are confident in how they exchange private data between people, machines, and systems. Get started today.
