Agentic AI Governance Lags Adoption in the UK: What the Salesforce 2026 Connectivity Benchmark Report Means for Enterprise Compliance
You can deploy a dozen AI agents across your enterprise. You can project 67% growth over two years. You can call it a digital transformation initiative and put it in the annual report.
But if half those agents are running in departmental silos and you cannot tell a regulator which ones accessed personal data last Tuesday—you do not have an AI strategy. You have a compliance incident that has not found its trigger yet.
That is the uncomfortable picture behind Salesforce’s 2026 Connectivity Benchmark Report, produced by Salesforce-owned MuleSoft in collaboration with Vanson Bourne and Deloitte Digital. The research surveyed 1,050 IT professionals across nine countries between October and November 2025, including 100 from the UK. The findings do not just flag a governance gap. They quantify a crisis building at the intersection of rapid AI adoption, fragmented data infrastructure, and a regulatory environment that has stopped accepting “trust us” as a compliance posture.
Here is what the numbers show—and why they should change how every CISO, compliance officer, and digital transformation leader thinks about their organization’s AI deployment strategy.
Five Key Takeaways
1. Adoption Has Lapped Governance. It Is Not Even Close. Salesforce’s 2026 Connectivity Benchmark Report found that 89% of UK and Ireland organizations deploy AI agents—but only 54% have a centralized governance framework with formal oversight. Nearly half the enterprises running AI agents cannot tell a regulator which ones accessed personal data, when, or under what controls. That is not a gap. It is an open door.
2. Half of All AI Agents Are Running in the Dark. Fifty percent of deployed agents are compartmentalized into departmental silos—not integrated, not governed, not visible at the enterprise level. These are agents accessing sensitive data with no centralized oversight. If you do not know what your agents are touching, you cannot protect it. And you certainly cannot prove you protected it when the auditor asks.
3. The Agent Footprint Is About to Explode. Governance Is Not Keeping Pace. Organizations run an average of 12 AI agents today. That number is projected to grow 67% in two years. Meanwhile, 75% of respondents worry that agents will create more complexity than business value. They are right to worry. Without governance, more agents means more risk—compounding, not linear.
4. Shadow AI Is Shadow IT With Regulatory Teeth. Employees are uploading contracts, customer records, and IP into consumer AI tools without IT’s knowledge. Unlike the shadow IT problems of a decade ago, shadow AI triggers direct violations of the EU AI Act, GDPR Article 25, and NIS2—each carrying fines that can reach tens of millions of euros. The risk profile is not comparable. It is an order of magnitude worse.
5. The Data Infrastructure Cannot Support What Organizations Are Asking Agents to Do. Ninety-seven percent of organizations reported barriers to using data for AI. That number is not a survey artifact. It is near-universal acknowledgment that the plumbing does not match the ambition. Organizations are running 796 applications on average, with only a third integrated. AI agents need data. The infrastructure is not ready to give it to them safely.
89% Deployed. 50% Ungoverned. Do the Math.
Let’s start with the finding that reframes everything else in the report.
Eighty-nine percent of UK and Ireland organizations already deploy AI agents. Not piloting. Not evaluating. Deploying. These agents are in production environments, accessing customer records, financial data, contracts, intellectual property, and operational systems across departments, business units, and geographies.
But half of those agents—fifty percent—are compartmentalized into departmental silos. They are not integrated at the enterprise level. They do not report to a centralized governance framework. No single team has visibility into what data those agents are accessing, how they are processing it, or whether that processing complies with the regulations the organization is subject to.
Read that again. Half of the AI agents operating inside UK enterprises today are invisible to enterprise governance.
Only 54% of organizations have a centralized governance framework with formal oversight for their AI agents. That means nearly half the enterprises running these systems are doing so without the foundational controls that regulators—across the EU AI Act, GDPR, NIS2, and the Cyber Resilience Act—are now requiring.
Andrew Comstock, SVP and general manager of MuleSoft, framed the challenge: “The true success of an agentic enterprise isn’t found in the sheer number of agents deployed, but the overall effectiveness of those agents. We need to think about how they are discovered, governed and orchestrated to work together.”
That orchestration is not happening. And the regulatory clock is not waiting for it.
Shadow AI Is Not Shadow IT. The Stakes Are Orders of Magnitude Higher.
The governance gap in the Salesforce report has a corollary that security teams have been shouting about for the past year. Most boardrooms have not been listening.
Shadow AI is what happens when employees and business units deploy AI agents, upload sensitive data to consumer AI tools, and build AI-assisted workflows—all without IT’s knowledge or approval. It is the predictable consequence of lowering the technical bar to build AI capabilities while leaving governance frameworks exactly where they were three years ago.
Kurt Anderson, managing director and API transformation leader with Deloitte Consulting, acknowledged the tension at the report’s press briefing: “There is an important leadership principle of putting knowledge in the hands of the people who can deploy it.” But he followed it with a warning that deserves to be printed and taped to every CIO’s monitor: “We do have to learn from the past, and make sure we have the right governance. We know what happens when you let individual contributors loose in terms of building systems that lack security or reliability or run up a big licensing bill.”
The shadow IT parallel is instructive, but the comparison understates the risk. When an employee adopted an unsanctioned SaaS tool a decade ago, the downside was a licensing headache or a data silo. When an employee uploads a customer contract, a financial model, or a set of patient records to ChatGPT or Claude today, the downside is a GDPR violation, an EU AI Act transparency failure, and a potential data breach—simultaneously. Three regulatory frameworks. One careless upload.
The Salesforce data makes the soil fertile for exactly this kind of incident. Organizations are running 796 applications on average, with only 33% integrated. In an environment where hundreds of systems operate in isolation, shadow AI does not just appear at the edges. It fills the cracks that disconnected systems leave behind. And there are a lot of cracks.
Four Regulations. One Governance Gap. Compounding Fines.
The governance gap in this report is not just an operational headache. It is a regulatory exposure problem—and the financial math is sobering.
European enterprises now face a converging set of regulations that each impose specific requirements on how AI systems access, process, and protect sensitive data. Siloed AI agents that lack centralized oversight do not just fail one of these frameworks. They fail all of them at once.
The EU AI Act requires data governance, transparency, and security controls for high-risk AI systems. Agents deployed in departmental silos—without audit trails, without access controls, without centralized reporting—fail these obligations by design. Not by accident. By architecture. Penalties reach up to €35 million or 7% of global revenue.
GDPR requires data protection by design under Article 25 and data minimization under Article 5. When agents operate in silos, organizations cannot answer the fundamental question regulators will ask: “Which AI systems accessed this person’s data, and under what conditions?” If you cannot answer that question, you are not compliant. It does not matter what your privacy policy says. Penalties reach up to €20 million or 4% of global revenue.
The NIS2 Directive requires organizations to manage risks across their digital supply chains. Ungoverned AI agents are an unmonitored attack surface accessing critical infrastructure data without visibility or incident reporting controls. Penalties reach up to €10 million or 2% of global revenue.
The Cyber Resilience Act mandates security-by-design for software products, including AI agents. Agents deployed without formal security controls, access management, or vulnerability monitoring fall short. No exceptions for “we were moving fast.”
The upcoming EU Digital Omnibus Bill will further align these frameworks into an interconnected regulatory web. A single governance failure will cascade across multiple compliance obligations. Organizations that treat each regulation in isolation—or worse, assume AI deployments are not covered yet—are building up exposure they cannot see until the enforcement action arrives.
97% of Organizations Have Data Barriers for AI. That Number Is Not a Typo.
The Connectivity Benchmark Report reveals a structural layer underneath the governance crisis that makes everything harder to fix: the data infrastructure itself is not ready.
Ninety-seven percent of organizations reported barriers to using data for AI use cases. Not most. Not a majority. Ninety-seven percent. The top blocker, cited by 35% of respondents, was outdated IT architecture and infrastructure driven by data silos and disconnected systems.
Enterprises are running an average of 796 applications with only a third integrated. AI agents need access to data to deliver value. But in environments where hundreds of applications operate in isolation, granting that access without a unified governance layer forces a choice between two bad options: restrict agents to the point of uselessness, or open data access without adequate controls. Most organizations are choosing the second option—whether they realize it or not.
Beena Ammanath, Global Deloitte AI Institute leader, framed the velocity in the report’s foreword: “AI adoption speed has surpassed predictions from even a year ago, with 84% of enterprise CIOs believing that AI will be as important to their businesses as the internet.” Forty percent of organizations already deploy autonomous agents. Another 41% plan to within the next year. The window for building governance infrastructure is not closing. For many organizations, it has already closed.
This is not an isolated finding. Process mining provider Celonis published its own 2026 Process Optimisation Report—surveying 1,600 global executives at companies with $500 million or more in revenue—and found the same pattern from a different angle. Eighty-one percent said AI projects will fail without process visibility. Seventy-six percent said their current processes are holding them back. Eighty-five percent want to be an agentic enterprise within three years. The ambition is universal. The readiness is not.
AI Agents Are Already Touching Your Most Sensitive Data. Every Day.
If governance still feels like a planning-cycle priority rather than a right-now priority, consider where AI agents are already operating—and what data they are already touching.
A separate Salesforce report published the same week—the seventh edition of its global State of Sales report—found that agentic AI has entered the top three UK sales techniques for 2026. Top-performing salespeople are 1.7 times more likely to use AI agents than those who struggle to hit their numbers. Forty-six percent of UK sellers say they have already used agents. UK sales teams expect agents to cut prospect research time by 38% and email drafting time by 38%.
Those are impressive productivity numbers. They are also a governance test that most organizations are failing. These agents are routinely accessing customer databases, CRM records, communication histories, pricing information, and contract details—every one of which carries regulatory obligations under GDPR and, depending on the sector, under NIS2 and the EU AI Act.
Here is the difference that matters: when those agents operate under proper governance—unified access controls, audit trails, data minimization—they deliver measurable productivity gains. When they operate without it, they become compliance liabilities that compound with every new deployment. Same technology. Different architecture. Entirely different risk profile.
What a Governed AI Data Architecture Looks Like—And Why Most Organizations Do Not Have One
The question is no longer whether to adopt AI agents. That decision has been made. Eighty-nine percent of UK organizations are there. The question is whether governance will arrive before the next audit, the next breach, or the next enforcement action.
Getting there requires a fundamental shift in how organizations think about AI data access. Bolting governance onto existing deployments after the fact does not work—the Salesforce report just proved that with 50% of agents still running in ungoverned silos. Enterprises need a centralized data governance layer that sits between AI agents and the sensitive information they access. That layer must deliver five capabilities.
Unified access control for AI agents. One platform governing which agents access which data, under what conditions, with what permissions. This prevents siloed deployments from bypassing enterprise security and extends zero-trust architecture to every AI agent interaction with sensitive data.
Complete audit trails for every AI interaction. When regulators ask which AI systems accessed personal data—and under the EU AI Act and GDPR, they will—organizations need clear, comprehensive answers. Not a three-week forensics project stitched together from six different tools. Answers. On demand.
Data minimization enforcement. AI agents should access only the data they need for a specific task. Granular controls supporting redaction, masking, and time-limited access grants directly support GDPR’s Article 5—and prevent the over-permissioning that turns every agent into a potential breach vector.
Shadow AI prevention. Detect and block employees from feeding sensitive data to unauthorized AI tools—while providing governed alternatives for AI-assisted workflows. Data loss prevention policies must extend to AI-specific exfiltration vectors. Traditional DLP was not built for this. The tooling has to catch up.
Third-party AI vendor management. Secure data exchange protocols, Data Processing Agreements, and continuous monitoring of third-party AI access. NIS2 supply chain security requirements make this non-negotiable—not aspirational.
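To make the first three capabilities concrete, here is a minimal sketch of how they combine into a single enforcement path: every agent request passes one central policy check, returns a minimized (masked) copy of the data, and leaves an audit record whether it was allowed or denied. All names here are hypothetical illustrations, not a real product API:

```python
from datetime import datetime, timezone

# Hypothetical central policy: (agent, dataset) -> fields the agent may read.
POLICY = {
    ("sales-agent", "crm.contacts"): {"name", "company"},
}
AUDIT_LOG: list[dict] = []  # one entry per interaction, allow or deny

def governed_fetch(agent_id: str, dataset: str, record: dict) -> dict:
    allowed = POLICY.get((agent_id, dataset))
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "dataset": dataset,
        "decision": "deny" if allowed is None else "allow",
        "fields": sorted(allowed or []),
    })
    if allowed is None:
        raise PermissionError(f"{agent_id} has no grant for {dataset}")
    # Data minimization: mask every field outside the grant.
    return {k: (v if k in allowed else "***") for k, v in record.items()}

record = {"name": "A. Person", "company": "Acme", "email": "a@acme.example"}
print(governed_fetch("sales-agent", "crm.contacts", record))
# {'name': 'A. Person', 'company': 'Acme', 'email': '***'}
```

Because the audit entry is written before the allow/deny decision takes effect, the log can answer the regulator's question—which agents touched which data, when, under what grant—including the requests that were refused.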
Kiteworks: Governed AI Data Access. From One Platform. Not Six.
This is the problem the Kiteworks Private Data Network is built to solve.
As enterprises scale their AI deployments, Kiteworks provides the unified data governance layer that controls, monitors, and audits every AI agent interaction with sensitive data. It does not sit alongside existing governance tools and hope they talk to each other. It replaces the gap between them—the uncontrolled space where AI agents access contracts, customer records, financial data, and intellectual property with no centralized oversight, no audit trail, and no way to prove compliance when the question comes.
The distinction from other approaches matters. Traditional secure file sharing platforms—Box, Dropbox—were not built with AI-specific governance controls. They cannot prevent data exfiltration to consumer AI tools. They do not provide audit trails for AI agent access. They do not address the EU AI Act. They solve a problem whose definition stopped being sufficient two years ago.
Detection-only DLP tools from Symantec, Forcepoint, and others can flag violations after they happen. They do not provide governed data access for approved AI workflows. Their model is block-or-allow. The AI governance problem requires enable-and-control. Those are different capabilities.
Emerging AI governance platforms focus on model governance—monitoring model behavior, tracking model outputs, managing model risk. They do not control the underlying sensitive data that AI agents access. Model governance without data governance is like locking the front door while leaving the data vault open.
Kiteworks occupies the position none of these approaches cover: governing the sensitive data that AI agents need to access, ensuring every interaction meets compliance requirements across GDPR, NIS2, the Cyber Resilience Act, and the EU AI Act—from a single platform. One audit trail. One access control framework. One place to answer the regulator’s question.
For CISOs, it is the centralized control plane that eliminates the risk of siloed agent deployments. For compliance officers, it is the audit trail and transparency documentation that EU regulators require. For digital transformation leaders, it is the governance infrastructure that unblocks AI scaling—without slowing down the business units that need it most.
The Window Is Closing. For Some Organizations, It Already Has.
The Salesforce 2026 Connectivity Benchmark Report makes the situation concrete. Eighty-nine percent of UK organizations have AI agents deployed. Half are running in ungoverned silos. The agent footprint is growing 67% in two years. European regulations are converging into a unified compliance framework with penalties that reach hundreds of millions of euros.
Organizations that build a governed AI data architecture now will scale AI safely, satisfy regulators on demand, and turn AI productivity into competitive advantage. Organizations that do not will discover their governance gap through a regulatory investigation, a data breach, or a fine that makes the cost of governance look like a rounding error.
This is no longer a planning exercise. It is an operational requirement. And the question is not whether your organization needs a data governance layer for AI agents. The question is whether you will build one before the next audit forces you to explain why you did not.
Frequently Asked Questions
What is the agentic AI governance gap?
The agentic AI governance gap is the widening disconnect between how fast enterprises deploy AI agents and how slowly they implement the governance frameworks, security controls, and compliance mechanisms required to manage them. Salesforce’s 2026 Connectivity Benchmark Report found that 89% of UK organizations deploy AI agents, but only 54% have a centralized governance framework with formal oversight—leaving nearly half of all enterprise AI deployments operating without the controls that regulators across the EU AI Act, GDPR, NIS2, and the Cyber Resilience Act now require.
How does the EU AI Act apply to enterprise AI agents?
The EU AI Act classifies certain AI systems as high-risk and requires them to meet standards for data governance, transparency, human oversight, and security. Enterprise AI agents that access sensitive data, make automated decisions affecting individuals, or operate in regulated sectors like financial services, healthcare, and critical infrastructure fall under these requirements. Agents deployed in departmental silos without audit trails, access controls, or centralized reporting fail these obligations by design. Non-compliance can result in fines of up to €35 million or 7% of global revenue.
What is shadow AI, and why is it riskier than shadow IT?
Shadow AI describes the use of unauthorized AI tools and services by employees without IT knowledge or approval—including uploading sensitive customer data, contracts, or intellectual property to consumer AI platforms like ChatGPT, Claude, or Gemini. Unlike traditional shadow IT, shadow AI creates direct violations of GDPR (unauthorized personal data processing), the EU AI Act (unregistered high-risk AI use), and NIS2 (unmonitored access to critical data)—each carrying fines that can reach tens of millions of euros.
How many AI agents do enterprises deploy, and how are they governed?
According to Salesforce’s 2026 Connectivity Benchmark Report, enterprises deploy an average of 12 AI agents, with that number projected to grow 67% over the next two years. However, 50% of these agents operate in departmental silos without enterprise-level integration or governance, creating fragmented data access and compliance blind spots that regulators are increasingly equipped to identify.
Which European regulations govern enterprise AI agents?
Multiple converging European regulations impose specific governance requirements on enterprise AI agents: the EU AI Act (data governance, transparency, and security for high-risk AI systems, with fines up to €35 million or 7% of revenue), GDPR (data protection by design under Article 25 and data minimization under Article 5, with fines up to €20 million or 4% of revenue), NIS2 (supply chain risk management for critical infrastructure, with fines up to €10 million or 2% of revenue), and the Cyber Resilience Act (security-by-design for software products). The upcoming EU Digital Omnibus Bill will further align these frameworks into an interconnected compliance web.
What is a data governance layer for AI agents?
A data governance layer for AI agents is a centralized platform that controls which AI systems can access which sensitive data, under what conditions, and with what level of permission. It provides complete audit trails for every AI interaction, enforces data minimization, prevents unauthorized AI data access through DLP controls, manages third-party AI vendor access, and enables compliance reporting across regulatory frameworks. The Kiteworks Private Data Network serves as this governance layer—enabling enterprises to adopt AI agents safely while maintaining compliance with GDPR, NIS2, the EU AI Act, and the Cyber Resilience Act from a single unified platform.
How does Kiteworks govern AI agent access to sensitive data?
Kiteworks Private Data Network provides the unified data governance layer that controls, monitors, and audits every AI agent interaction with sensitive data. Unlike traditional file sharing platforms that lack AI-specific controls, detection-only DLP tools that block without enabling governed access, or emerging AI governance platforms that address model risk but not data governance, Kiteworks governs the sensitive data AI agents access—delivering unified access control, complete audit trails, data minimization enforcement, shadow AI prevention, and third-party AI vendor management from a single platform that satisfies compliance requirements across all converging European regulations.
Additional Resources
- Blog Post Zero Trust Architecture: Never Trust, Always Verify
- Video Microsoft GCC High: Disadvantages Driving Defense Contractors Toward Smarter Advantages
- Blog Post How to Secure Classified Data Once DSPM Flags It
- Blog Post Building Trust in Generative AI with a Zero Trust Approach
- Video The Definitive Guide to Secure Sensitive Data Storage for IT Leaders