It’s Not the AI That’s Unsafe. It’s the Access You’re Giving It.
There is a number in Teleport’s 2026 State of AI in Enterprise Infrastructure Security report that should stop every CISO in their tracks: 4.5 times.
Organizations with over-privileged AI systems are 4.5 times more likely to experience security incidents than those with least-privilege controls. Not twice as likely. Not moderately more exposed. Four and a half times. The report calls this the single most predictive factor for AI-related incidents — more predictive than the industry the organization operates in, its maturity level, or its stated confidence in AI security.
Teleport CEO Ev Kontsevoy put the finding in direct terms: the data is clear. It is not the AI that is unsafe. It is the access we are giving it.
That statement reframes the entire AI security conversation. The risk is not that AI systems are inherently dangerous. The risk is that organizations are deploying AI with access rights that no human in an equivalent role would receive — and then discovering, through incidents, that over-provisioned access at machine speed produces consequences at machine scale.
5 Key Takeaways
- Over-Privileged AI Is the Single Most Predictive Factor for Security Incidents. Organizations where AI systems have more access rights than needed experience a 76% incident rate. Organizations with least-privilege controls in place report just 17%. That is a 4.5 times difference — and the Teleport report calls it the single most predictive factor for AI-related incidents, more predictive than industry, maturity level, or stated confidence. The AI is not the problem. The access organizations are giving it is.
- 70% of AI Systems Have More Access Than a Human in the Same Role Would Get. Seven in ten respondents said their AI systems have more access rights than a human performing equivalent tasks would receive. Nearly a fifth said their AI gets significantly more. This is not marginal over-provisioning. It is a structural failure in how organizations extend access to non-human identities — one that creates the exact conditions for the incidents the report documents.
- 85% of Security Leaders Are Worried About AI Risks — Based on Real Experience, Not Theory. This is not a hypothetical concern. A third of respondents confirmed at least one AI-related incident. Another 24% suspect one may have occurred. The worry is grounded in operational reality: organizations are seeing AI deliver genuine productivity gains — faster incident investigation (66%), better documentation (71%), improved engineering output (65%) — while simultaneously experiencing the security consequences of deploying AI with excessive access.
- Static Credentials Are Fueling AI Privilege Creep. Passwords, API keys, and long-lived tokens are the mechanism through which AI systems accumulate excessive access. Organizations with high reliance on static credentials report a 67% incident rate, compared to 47% for those with low reliance. Static credentials do not expire, do not adapt, and do not enforce context-aware access decisions. They are the credential model of 2015 being applied to the AI infrastructure of 2026.
- 64% of Organizations Have No Formal Governance Controls for AI Infrastructure. Forty-three percent of organizations reported having only informal controls, with no formal governance in place for AI infrastructure. Another 21% reported having no controls at all. Combined, that is nearly two-thirds of organizations deploying AI into infrastructure without the governance structures needed to prevent over-privilege, detect incidents, or demonstrate compliance. The AI is already deployed. The controls are not.
The Access Problem in Numbers
Teleport polled over 200 US infrastructure security leaders to compile the report, defining AI in infrastructure as AI-powered workloads, agentic systems, machine-to-machine communication, ChatOps, compliance automation, and incident detection. The findings paint a picture of an industry that has embraced AI’s productivity benefits while failing to govern the access those AI systems require.
Seventy percent of respondents said their AI systems have more access rights than a human in the same role would get. Nineteen percent said their AI receives significantly more access. When AI systems are over-privileged, the incident rate reaches 76%. When least-privilege controls are in place, it drops to 17%. The difference is not incremental. It is categorical.
Static credentials — passwords, API keys, and long-lived tokens — are the primary mechanism through which this over-privilege occurs. Organizations with high reliance on static credentials report a 67% incident rate, compared to 47% for those with low reliance. Static credentials do not expire on their own. They do not adapt to context. They do not enforce purpose limitations. Once issued, they grant persistent access until someone remembers to revoke them.
The governance gap compounds the problem. Forty-three percent of organizations have only informal controls for AI infrastructure, with no formal governance in place. Another 21% have no controls at all. That means nearly two-thirds of organizations are deploying AI into their infrastructure environment with no structured approach to managing the access those systems receive. The AI systems are in production. The governance frameworks are still in planning.
Infrastructure Identity Management Is Necessary. It Is Not Sufficient.
The Teleport report correctly identifies that identity management needs to evolve for AI. Sixty-nine percent of security leaders agree. But the evolution required is more fundamental than the report’s recommendations suggest.
Traditional identity management answers the question: who or what is this entity, and what systems can it access? For AI agents, this means managing authentication to servers, APIs, databases, and cloud infrastructure. Platforms like Teleport address this layer. They manage AI agent identities, issue credentials, and control access to infrastructure resources.
But here is the gap: an AI agent with valid infrastructure credentials and proper authentication can still access data it should not. Infrastructure access and data access are not the same thing. An AI agent authenticated to a database server can, by default, query every table and every record in that database. An AI agent with API access to a file repository can, by default, read every file. The infrastructure identity is verified. The data access is ungoverned.
This is where the 4.5 times multiplier originates. Over-privileged AI does not mean AI with stolen credentials or compromised identities. It means AI with legitimate access to infrastructure that contains data far beyond what the AI’s function requires. The incident does not start with an authentication failure. It starts with an authorization that is too broad.
Closing this gap requires data-centric access controls that operate independently of infrastructure identity. Even when an AI agent has valid credentials to a system, data-centric controls enforce what data within that system the agent can access, for what purpose, and under what conditions. This is the distinction between managing who the AI is and governing what the AI can do with the data it reaches.
What Data-Centric Controls for AI Actually Look Like
Preventing over-privileged AI requires controls that operate at the data layer, not just the infrastructure layer. Here is what that means in practice.
Least-privilege data access by default means that an AI agent authenticated to a system can only access the specific data classifications required for its function. An incident detection AI can access system logs but not customer PII. A compliance automation AI can analyze audit trail metadata but not the underlying sensitive records. A ChatOps bot can retrieve operational status data but not financial records. The access boundary is drawn at the data level, not the system level.
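The data-level boundary described above can be sketched as a default-deny allow-list. This is an illustrative model, not the report's implementation; the agent roles and classification names are hypothetical:

```python
# Hypothetical least-privilege map: each AI agent role is allowed only the
# data classifications its function requires; everything else is denied.
ALLOWED_CLASSIFICATIONS = {
    "incident_detection": {"system_logs", "network_telemetry"},
    "compliance_automation": {"audit_metadata", "aggregate_metrics"},
    "chatops_bot": {"operational_status"},
}

def can_access(agent_role: str, classification: str) -> bool:
    """Default-deny: access is granted only if this classification is
    explicitly allowed for this agent role."""
    return classification in ALLOWED_CLASSIFICATIONS.get(agent_role, set())
```

Anything not explicitly listed — an unknown classification, or an unknown agent — is denied, which is what drawing the boundary at the data level rather than the system level means in practice.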
Purpose binding restricts AI agents to specific data repositories and use cases. The AI’s access is not defined by what systems it can reach. It is defined by what it is supposed to do. If an incident detection AI suddenly begins querying customer financial records, that access is blocked and flagged — regardless of whether the AI has valid infrastructure credentials to the system hosting those records.
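A minimal sketch of purpose binding, with hypothetical names; the point is that a request outside the bound purpose is both blocked and flagged, regardless of whether the agent's infrastructure credentials are valid:

```python
# Illustrative purpose binding: access is defined by the agent's declared
# purpose, not by which systems its credentials can reach.
PURPOSE_SCOPES = {
    "incident_detection": {"system_logs"},
}

flagged_requests = []  # blocked requests surfaced for security review

def request_data(agent: str, purpose: str, classification: str) -> bool:
    allowed = classification in PURPOSE_SCOPES.get(purpose, set())
    if not allowed:
        # Valid infrastructure credentials do not matter here: the request
        # falls outside the bound purpose, so it is blocked and flagged.
        flagged_requests.append((agent, purpose, classification))
    return allowed
```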
Continuous verification means every AI data request is re-evaluated against current policies. Not authenticate once, access all data forever. Every query, every retrieval, every interaction with enterprise data is checked against the AI’s authorized purpose, the data classification being requested, and the current risk context. This prevents the privilege creep that occurs when AI systems accumulate access over time without corresponding governance.
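Continuous verification can be illustrated as a lookup against live policy on every request, with no cached session grant. A simplified sketch; a real enforcement point would also weigh data classification and risk context:

```python
# Sketch of continuous verification: there is no session-level grant. Each
# request is re-evaluated against the *current* policy, so a revocation
# takes effect on the very next query.
policy = {"reporting_agent": {"aggregate_metrics"}}

def authorize(agent: str, classification: str) -> bool:
    # No caching of earlier decisions: consult the live policy every time.
    return classification in policy.get(agent, set())

# First request succeeds under the current policy...
first = authorize("reporting_agent", "aggregate_metrics")
# ...then the policy is tightened, and the identical request is denied.
policy["reporting_agent"] = set()
second = authorize("reporting_agent", "aggregate_metrics")
```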
Anomaly detection monitors AI agent behavior to identify when access patterns deviate from established baselines. When an AI agent that normally processes 50 records per hour suddenly requests 5,000, or when a compliance automation tool begins accessing data categories outside its defined scope, the deviation triggers automated alerting and can invoke kill switches that immediately revoke the agent’s data access.
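A toy version of this baseline check and kill switch, with the baseline figure and deviation threshold chosen purely for illustration:

```python
# Illustrative baseline check with a kill switch: if an agent's request
# volume deviates far from its established baseline, revoke its access.
baselines = {"etl_agent": 50}   # typical records per hour (assumed baseline)
revoked = set()                 # agents whose data access has been cut off
DEVIATION_FACTOR = 10           # alert threshold (an assumption)

def record_activity(agent: str, records_this_hour: int) -> bool:
    """Returns True if an anomaly was detected and the kill switch fired."""
    baseline = baselines.get(agent, 0)
    if baseline and records_this_hour > baseline * DEVIATION_FACTOR:
        revoked.add(agent)  # kill switch: immediate revocation
        return True
    return False
```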
Comprehensive audit trails document every AI interaction with enterprise data — what was accessed, when, by which AI agent, under whose authorization, and what actions were taken. These trails are not just for compliance reporting. They are the forensic foundation that enables the faster incident investigation (66%) and better documentation (71%) that the Teleport report identifies as key AI benefits. You cannot investigate an AI incident you cannot see.
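The fields such a trail captures can be sketched as an immutable record type. This is an illustrative structure, not a prescribed schema:

```python
from dataclasses import dataclass, field
import datetime

# Illustrative audit record capturing the fields listed above: what was
# accessed, when, by which agent, under whose authorization, and the action.
@dataclass(frozen=True)  # frozen: records cannot be altered once written
class AuditRecord:
    agent_id: str
    human_principal: str        # whose permissions the agent acted under
    data_classification: str
    action: str                 # e.g. "read", "write", "transmit"
    timestamp: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc)
    )

audit_log: list[AuditRecord] = []

def log_access(agent_id, principal, classification, action):
    audit_log.append(AuditRecord(agent_id, principal, classification, action))
```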
Applying Data-Centric Controls Across AI Infrastructure Use Cases
The Teleport report defines AI infrastructure broadly: AI-powered workloads, agentic systems, machine-to-machine communication, ChatOps, compliance automation, and incident detection. Each of these use cases creates specific data access requirements — and specific over-privilege risks.
AI-powered workloads need access to enterprise data repositories to function. Data-centric controls create a secure bridge between the workload and the data, enforcing zero-trust access policies that limit the workload to the data classifications its function requires. The workload runs at machine speed. The access controls operate at the same speed.
Agentic AI systems — autonomous agents that can execute multi-step processes, interact with APIs, and make operational decisions — present the highest over-privilege risk because they act on data rather than simply analyzing it. Each agent creates a non-human identity that requires authentication, authorization, and continuous monitoring. Data-centric controls ensure agentic AI inherits the permissions of its authorizing human principal and cannot escalate beyond those boundaries.
Machine-to-machine communication and ChatOps create data exchange channels that operate without direct human oversight for each interaction. Data-centric controls ensure these channels respect data access policies regardless of the speed or volume of requests. When a Slack bot or Teams bot accesses enterprise data on behalf of a user, user context awareness ensures the bot inherits that user’s data permissions — not the broad system-level access the bot’s service account may hold.
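User context inheritance reduces to a set intersection: the bot's effective scope for a given request is the overlap between its service-account scope and the requesting user's own permissions. A sketch with hypothetical scopes:

```python
# Hypothetical scopes: the bot's service account is broad, but each request
# is narrowed to what the requesting human could access directly.
SERVICE_ACCOUNT_SCOPE = {"operational_status", "system_logs", "financial_reports"}
USER_PERMISSIONS = {
    "alice": {"operational_status", "financial_reports"},
    "bob": {"operational_status"},
}

def effective_scope(user: str) -> set[str]:
    # The bot never exceeds the intersection; an unknown user gets nothing.
    return SERVICE_ACCOUNT_SCOPE & USER_PERMISSIONS.get(user, set())
```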
Compliance automation and incident detection are the use cases where over-privilege is most commonly rationalized. The argument is that these systems need broad access to function effectively. The reality is that they need access to audit trail metadata, aggregated metrics, and system logs — not raw access to the underlying sensitive data. A compliance automation tool can analyze data access patterns without reading the patient records those patterns describe. An incident detection system can identify anomalous behavior without downloading the financial data the behavior involves.
The Static Credential Problem Is a Data Problem
The Teleport report identifies static credentials — passwords, API keys, and long-lived tokens — as a primary driver of over-privileged AI. Organizations with high reliance on static credentials report a 67% incident rate. The report recommends reducing reliance on static credentials as a key mitigation.
This recommendation is correct but incomplete. Replacing static credentials with dynamic, short-lived credentials addresses the infrastructure authentication problem. It ensures AI agents prove their identity more frequently and that credentials expire before they can be misused. But dynamic credentials alone do not solve the data access problem. An AI agent with a freshly issued, properly scoped infrastructure token can still access every data record in the system that token grants access to.
The complete solution requires layering data-centric controls on top of improved credential management. Dynamic credentials govern infrastructure access. Data classification and purpose binding govern what data the AI can reach within that infrastructure. Continuous verification ensures the data access policies are enforced on every request. Audit trails document the full chain — from credential issuance to infrastructure access to data retrieval — creating the visibility needed to detect and investigate incidents.
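The layering can be sketched as two independent checks on every fetch: a short-lived credential for infrastructure access, plus a data-policy check the token alone cannot satisfy. The TTL and policy names are assumptions for illustration:

```python
import time

# Sketch of the layered model: a short-lived token governs infrastructure
# access, while a separate data-layer policy governs what the agent may read.
TOKEN_TTL_SECONDS = 300   # assumed 5-minute credential lifetime
DATA_POLICY = {"report_agent": {"aggregate_metrics"}}

def issue_token(agent: str) -> dict:
    return {"agent": agent, "expires_at": time.time() + TOKEN_TTL_SECONDS}

def fetch(token: dict, classification: str) -> bool:
    # Layer 1: infrastructure — the credential must still be valid.
    if time.time() >= token["expires_at"]:
        return False
    # Layer 2: data — a valid token is not enough; the data policy must
    # also allow this classification for this agent.
    return classification in DATA_POLICY.get(token["agent"], set())
```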
This layered approach — infrastructure identity management plus data-centric access controls — is what reduces the 4.5 times incident multiplier. Neither layer alone is sufficient. Infrastructure identity without data controls leaves the over-privilege gap at the data layer. Data controls without infrastructure identity leave the authentication gap at the system layer. Defense in depth requires both.
What Security Leaders Should Do Now
Implement least-privilege data access for every AI system currently deployed. Audit what data each AI agent can access today. Compare that access to what the agent actually needs for its defined function. Restrict access to the minimum data classifications required. This single action addresses the factor that the report identifies as the most predictive of incidents.
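The audit step reduces to comparing granted versus required access per agent; the surplus is what to revoke. A sketch with hypothetical inventories:

```python
# Illustrative over-privilege audit: compare what each agent can access
# today with what its function actually requires. The difference is the
# surplus to revoke. Agent and classification names are hypothetical.
granted = {
    "incident_ai": {"system_logs", "customer_pii", "financial_records"},
    "chatops_bot": {"operational_status"},
}
required = {
    "incident_ai": {"system_logs"},
    "chatops_bot": {"operational_status"},
}

def surplus_access(agent: str) -> set[str]:
    """Classifications the agent holds but does not need."""
    return granted.get(agent, set()) - required.get(agent, set())
```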
Deploy data-centric access controls that operate independently of infrastructure identity. Even when AI agents have valid infrastructure credentials, data-centric controls should enforce what data those agents can access, for what purpose, and under what conditions. Purpose binding, attribute-based access control, and continuous verification at the data layer close the gap that infrastructure identity management alone cannot.
Replace static credentials with dynamic, short-lived credentials — and layer data controls on top. Eliminate passwords, long-lived API keys, and persistent tokens for AI systems. Move to dynamic credential issuance with automatic expiration. But recognize that credential reform alone does not solve the data access problem. Layer data classification, purpose binding, and continuous verification to govern what AI does with the access its credentials provide.
Build governance frameworks before the next AI system goes into production. Sixty-four percent of organizations have no formal governance controls for AI infrastructure. Every new AI deployment without governance widens the over-privilege gap. Establish clear policies for AI data access, define escalation paths for AI-related incidents, and assign explicit ownership for AI risk management. The governance framework should be in place before the AI system receives its first credential.
Deploy anomaly detection and kill switches for AI data access. Monitor AI agent behavior continuously against established baselines. When access patterns deviate — unusual data volumes, new data categories, off-hours activity — trigger automated alerting and have the ability to revoke AI data access immediately. The 4.5 times incident multiplier means that detecting over-privileged AI behavior early is worth more than any other single control.
Build audit trails that document the full AI data access chain. Every AI interaction with enterprise data should be logged with the AI agent identity, the authorizing human principal, the data classification accessed, the timestamp, and the action taken. These trails support the faster incident investigation and better documentation that the report identifies as key AI benefits — while providing the forensic evidence needed when incidents occur and the compliance documentation needed when regulators ask.
The 4.5x Multiplier Is a Choice, Not an Inevitability
The Teleport report delivers a finding that is as uncomfortable as it is clarifying: the organizations experiencing the most AI security incidents are not the ones deploying the most AI. They are the ones deploying AI with the most access. Over-privilege, not adoption, is the risk factor.
This means the 4.5 times incident multiplier is a choice. Organizations that implement least-privilege data access, deploy data-centric controls alongside infrastructure identity management, replace static credentials, build governance frameworks, and monitor AI behavior continuously will operate in the 17% incident rate territory. Organizations that deploy AI with broad access, static credentials, and no governance will operate in the 76% territory.
The productivity benefits are real. Faster incident investigation. Better documentation. Improved engineering output. No organization should forgo these gains. But capturing them without accepting 4.5 times higher incident rates requires a fundamental shift in how organizations think about AI access: from the infrastructure layer to the data layer, from static credentials to continuous verification, and from governance as an afterthought to governance as a prerequisite. The Kiteworks Private Data Network provides the data-centric governance layer — purpose binding, continuous verification, immutable audit trails, and anomaly detection — that infrastructure identity platforms alone cannot provide.
The AI is not unsafe. The access you are giving it is. Fix the access, and you fix the risk.
To learn how Kiteworks can help, schedule a custom demo today.
Frequently Asked Questions
What is the difference between infrastructure access and data access for AI agents?

Infrastructure access controls which systems, APIs, and servers an AI agent can reach — managed through identity and access management platforms that authenticate agents and issue credentials. Data access controls what information the agent can retrieve within those systems. The critical gap is that a valid infrastructure credential grants access to all data within a system by default. An AI agent authenticated to a database can query every table; an agent with API access to a file repository can read every file. The Teleport report’s 4.5x incident multiplier originates precisely here — in AI with legitimate infrastructure access reaching data far beyond what its function requires. Closing this gap means layering data-centric controls — purpose binding, attribute-based access control, continuous verification — on top of infrastructure IAM, not instead of it.
What is purpose binding, and how does it limit what an AI agent can do?

Purpose binding is a data-centric control that defines an AI agent’s access not by what systems it can authenticate to, but by what it is designed to do. A compliance automation agent gets access to audit trail metadata and aggregated metrics — not the underlying patient records or financial data those logs describe. An incident detection AI gets access to system logs — not customer PII or engineering configuration files. If the agent attempts to query data outside its defined purpose — even with valid infrastructure credentials — that request is blocked and flagged. This prevents the privilege creep that occurs when AI systems gradually accumulate access over time, and eliminates the rationalization that compliance or detection tools need broad access to function effectively. They don’t. They need access to the right data, scoped tightly to their function.
Why do static credentials fuel AI privilege creep, and what should replace them?

Static credentials — passwords, API keys, and long-lived tokens — enable AI privilege creep in two ways. First, they don’t expire, meaning access granted at deployment persists indefinitely unless manually revoked, which rarely happens as AI systems evolve. Second, they are typically scoped at issuance and never revisited even as the AI agent’s function narrows or changes. Organizations with high static credential reliance report a 67% AI incident rate versus 47% for those with low reliance. The replacement is dynamic, short-lived credential issuance with automatic expiration — ensuring AI agents re-authenticate frequently and that credentials can’t persist beyond their intended use window. But credential reform alone doesn’t solve the data access problem: an AI agent with a freshly issued dynamic token can still access every record in the system it’s authenticated to. Dynamic credentials need to be layered with purpose binding and data loss prevention controls that govern what data the agent can reach within that infrastructure.
How can organizations govern ChatOps bots and machine-to-machine AI that operate at machine speed?

ChatOps bots and machine-to-machine AI present a specific governance challenge: they execute at machine speed across many interactions, making human review of each request impractical. The governance model needs to shift from per-interaction human approval to pre-defined access policy enforcement. Each bot or agent should be assigned a distinct non-human identity with a defined purpose scope — the access boundary is set at deployment and enforced automatically on every request. User context inheritance is critical: when a Slack or Teams bot accesses enterprise data on behalf of a user, it should inherit that user’s data permissions rather than the broader system-level access the bot’s service account holds. Zero-trust principles apply directly — every data request is treated as untrusted until verified against current policy, regardless of the agent’s prior behavior. Immutable audit trails capturing every interaction provide the human oversight layer that can’t happen in real time.
What should an AI audit trail capture to satisfy both compliance and forensic investigation needs?

An AI audit trail that satisfies both compliance obligations and forensic investigation needs must capture six elements for every AI-data interaction: the AI agent identity (which agent made the request), the authorizing human principal (whose permissions the agent was acting under), the data classification accessed (what category and sensitivity of data was retrieved), the timestamp and duration, the action taken (read, write, modify, transmit), and the destination if data left the system. This level of granularity enables the faster incident investigation that the Teleport report identifies as a key AI productivity benefit — you can reconstruct exactly what a compromised or misbehaving agent accessed and when. It also provides the compliance documentation that regulators require: under GDPR, HIPAA, and frameworks like NIST 800-53, demonstrating that AI systems operated within authorized boundaries requires records, not assertions. Organizations using the Kiteworks AI Data Gateway generate these trails continuously and immutably across every AI interaction with enterprise data.
Additional Resources
- Blog Post: Zero Trust Architecture: Never Trust, Always Verify
- Video: Microsoft GCC High: Disadvantages Driving Defense Contractors Toward Smarter Advantages
- Blog Post: How to Secure Classified Data Once DSPM Flags It
- Blog Post: Building Trust in Generative AI with a Zero Trust Approach
- Video: The Definitive Guide to Secure Sensitive Data Storage for IT Leaders