U.K. Government AI Adoption: Shadow Risks and Gaps Exposed
The U.K. government has set out one of the most ambitious AI agendas in the world. National strategies from DSIT and the Cabinet Office. A Digital Centre of Government. An AI Playbook. A newly announced AI Skills Hub promising free training at scale. On paper, the U.K. is doing everything right.
On the ground, the picture is very different.
Key Takeaways
- Shadow AI Is Growing Across U.K. Government — and Nobody Has Visibility. Globally, 64% of public servants in low-enablement environments use personal logins for AI at work, and 70% use AI without their manager knowing. With 2 in 5 U.K. civil servants unsure what they’re even permitted to use AI for, the conditions for unchecked shadow AI use are firmly in place — putting citizen data at risk with no audit trail and no incident response capability.
- The U.K. Ranks 6th Out of 10 Countries Despite World-Class AI Ambition. The U.K. scored 47 out of 100 on the Public Sector AI Adoption Index — ahead of the U.S. (45), Germany (44), and Japan (43), but well behind Singapore (58), India (58), and Saudi Arabia (66). The gap isn’t strategy or intent. It’s execution: translating national ambition into confident, everyday AI use by frontline civil servants.
- Over Half of U.K. Civil Servants Have Received Zero AI Training. Despite scoring 51/100 on education — 6th in the index — 54% of U.K. public servants report receiving no AI training at all. Of those who have been trained, 75% say AI is easy to use, proving training works. The problem is that most civil servants never get it.
- Adoption Is Being Driven Bottom-Up, Not by Organisational Leadership. In the U.K., AI adoption runs on individual curiosity and peer support rather than organisational strategy. 46% of civil servants cite a lack of clear direction from leadership, and colleagues at work are the primary route through which U.K. public servants learn about AI — ahead of formal training or official guidance.
- Embedding Is the U.K.’s Weakest Score — and the One That Matters Most. The U.K. scored just 42/100 on embedding, meaning AI remains dependent on local initiative rather than systemic support. Across the index, 61% of workers in high-embedding environments report benefits from advanced AI use, compared with just 17% where embedding is low. Until the U.K. embeds AI into everyday workflows, productivity gains will stay confined to early adopters.
The Public Sector AI Adoption Index 2026, released by Public First for the Center for Data Innovation with sponsorship from Google, surveyed 3,335 public servants across 10 countries — including 345 in the United Kingdom. The U.K. ranks sixth out of ten, scoring 47 out of 100. That places it in the middle of the pack, behind Saudi Arabia (66), Singapore (58), India (58), South Africa (55), and Brazil (49).
For a country with this level of strategic ambition and AI ecosystem strength, that’s not good enough. And the data suggests the problem isn’t what Whitehall is saying about AI — it’s that the message isn’t reaching the people who need to hear it most.
The Numbers That Should Worry Every Government CISO
The index measures how public servants experience AI across five dimensions: enthusiasm, education, enablement, empowerment, and embedding. For the U.K., the scores reveal a workforce that is aware of AI’s potential but hasn’t been given the tools, training, or permission to act on it:
- Enthusiasm: 47/100 — ranked 8th out of 10. Only 43% of U.K. public servants feel optimistic about AI in the public sector, and just 39% describe AI as empowering.
- Education: 51/100 — the U.K.’s strongest score, but misleading. 54% of civil servants report receiving no AI training whatsoever. Of those who have been trained, 75% find AI easy to use — proving training works when it exists.
- Empowerment: 49/100 — around 2 in 5 public servants are unsure what they’re permitted to use AI for, and 46% say leaders don’t provide clear guidance on how AI should be used.
- Enablement: 47/100 — tools exist but access is uneven across departments and often not matched to everyday needs.
- Embedding: 42/100 — AI use remains dependent on local initiative rather than systemic support. Only 17% report using AI for advanced or technical tasks.
63% of U.K. civil servants report knowing “a little” or “nothing at all” about AI. More than 2 in 5 lack confidence in their ability to use AI tools. Only 38% believe AI is being used effectively within their team.
These aren’t the numbers of a workforce that has rejected AI. These are the numbers of a workforce that has been left to figure it out on their own.
The Shadow AI Problem Hiding in Plain Sight
Here is the finding from the global index that U.K. government security leaders need to confront.
In low-enablement environments across the index, 64% of enthusiastic AI workers report using personal logins for AI tools at work, and 70% use AI for work tasks without their manager knowing.
The U.K.’s enablement score sits at 47/100. Its empowerment score — the measure of whether workers feel authorised and clear on what’s permitted — is 49/100. Around 2 in 5 civil servants don’t know what they’re allowed to use AI for. That is precisely the environment where shadow AI thrives.
Think about what this means in practice for U.K. government departments. Civil servants using personal ChatGPT or Gemini accounts to draft policy documents, summarise casework, or analyse datasets containing citizen information. Sensitive data — including information protected under U.K. GDPR and the Data Protection Act 2018 — potentially being ingested into public large language models with no audit trail, no data classification controls, and no ability to determine what was exposed after the fact. Decisions shaped by AI outputs that haven’t been vetted for accuracy, bias, or appropriateness. And potential violations of data protection obligations with no forensic evidence to assess the scope.
The irony is sharp. Departments that are trying to be cautious about AI by restricting access or staying silent on permissions aren’t preventing AI use. They’re driving it underground — creating far more risk than departments that provide approved tools with clear usage guidance.
This is where the conversation needs to shift from “should we allow AI” to “how do we enable AI securely.” Solutions like Kiteworks’ Secure MCP Server represent the kind of infrastructure that can bridge this gap — enabling AI productivity with tools like Claude, ChatGPT, and Copilot while keeping sensitive data within the private network. Existing governance frameworks (RBAC/ABAC) extend to all AI interactions, every AI operation is logged for compliance and forensics, and sensitive content never leaves the trusted environment. For U.K. government organisations, alignment with U.K. GDPR, the Data Protection Act 2018, and frameworks like the NCSC’s Cloud Security Principles means these protections map directly to existing compliance obligations.
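To make the pattern concrete, here is a minimal sketch in Python of the kind of gate such infrastructure puts in front of every AI interaction: an existing role and a data classification label are checked before a prompt leaves the trusted network, and the decision is logged either way. All of the names here (the roles, the classification labels, the functions) are illustrative assumptions for this article, not Kiteworks' actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

# Hypothetical classification labels, loosely modelled on U.K. protective markings.
ALLOWED_FOR_EXTERNAL_AI = {"public", "official"}

@dataclass
class AIRequest:
    user_id: str
    user_roles: set[str]      # e.g. {"caseworker", "ai-user"}, from the existing RBAC system
    classification: str       # label already attached to the source data
    ai_system: str            # e.g. "claude", "chatgpt", "copilot"
    purpose: str              # short justification entered by the user

def audit_log(event: dict) -> None:
    """Record every AI operation, allowed or blocked; a real deployment would
    write to an immutable store and forward to a SIEM."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    print(json.dumps(event))  # placeholder for a real tamper-evident log sink

def authorise_ai_request(req: AIRequest) -> bool:
    """Apply role and classification policy before a prompt leaves the trusted network."""
    permitted = "ai-user" in req.user_roles and req.classification in ALLOWED_FOR_EXTERNAL_AI
    audit_log({
        "user_id": req.user_id,
        "ai_system": req.ai_system,
        "classification": req.classification,
        "purpose": req.purpose,
        "decision": "allowed" if permitted else "blocked",
    })
    return permitted

# An official-sensitive document is blocked, and the attempt is still logged.
allowed = authorise_ai_request(AIRequest(
    user_id="jdoe",
    user_roles={"caseworker", "ai-user"},
    classification="official-sensitive",
    ai_system="chatgpt",
    purpose="summarise casework notes",
))
print("request allowed?", allowed)  # False
```

The specific labels matter less than the pattern: the policy decision and the audit record both live inside the department's own environment, so the question of who sent what to which model always has an answer.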
The alternative — hoping that silence and caution will keep civil servants from using AI — is a fantasy that this index has demolished with hard numbers.
The Translation Gap: Whitehall Ambition vs. Frontline Reality
The U.K. government has not been idle on AI. DSIT and the Cabinet Office have placed AI at the heart of public service modernisation. The AI Playbook provides central guidance. The newly announced AI Skills Hub aims to expand free AI training. There is genuine intent to move beyond pilots and specialist teams toward broader, everyday use.
But the index reveals a stark disconnect between that national intent and the experience of frontline civil servants.
While 60% of U.K. public servants say AI use has increased over the past year, adoption remains largely confined to basic tasks like drafting and analysis. Fewer than one in three use AI to improve workflows, and only 17% report using it for advanced or technical tasks. The U.K. has high awareness of AI’s potential — most say AI is easy to use (60%), effective (52%), and believe it can save time (66%) — but far fewer have experienced those benefits in their daily work.
The critical difference between the U.K. and the advanced adopters is not technology. It’s the infrastructure of confidence.
In Singapore, 73% of public servants are clear on what they can and cannot use AI for, and 58% know exactly who to ask when they hit a problem. Central agencies provide shared platforms, approved tools, and practical guidance. In Saudi Arabia, a top-down national strategy linked to Vision 2030 has made AI feel like modernisation rather than risk, with 65% accessing enterprise-level AI tools and 79% using AI for advanced tasks. In India, 83% are optimistic about AI and 59% want it to dramatically change their daily work.
In the U.K., by contrast, adoption is currently driven bottom-up — by individual curiosity and peer support rather than organisational momentum. Colleagues at work are the primary route through which U.K. civil servants learn about AI, ahead of formal training or official guidance. That organic enthusiasm is valuable, but it’s no substitute for systemic enablement.
The Missing Layer: AI Data Governance for U.K. Government
The U.K.’s challenge is not just about training or leadership messaging. It’s about the absence of a data governance infrastructure that makes secure AI use possible at scale.
Most U.K. government departments lack visibility into what data is being shared with AI systems. Which civil servants are using AI, and for what purposes? Do AI-generated outputs contain sensitive information that shouldn’t be shared externally? How can data classification policies be enforced when AI tools are involved? For most departments, the honest answer is “we don’t know.”
This is where AI data governance frameworks become essential — not as a barrier to adoption, but as the foundation that makes confident adoption possible. Data security posture management (DSPM) capabilities can discover and classify sensitive data across repositories, including data being ingested into AI systems. Automated policy enforcement can block privileged or confidential data from AI ingestion based on classification labels. Comprehensive audit logs can track all AI-data interactions. And when aligned with U.K. GDPR, the Data Protection Act 2018, and the NCSC’s guidance on cloud and AI security, these capabilities help departments govern AI risks throughout the data life cycle.
Kiteworks’ approach to this challenge is instructive. By integrating DSPM with automated policy enforcement and immutable audit logging, organisations can tag data by sensitivity level — public, official, official-sensitive, secret — and enforce those classifications automatically when AI tools are involved. Every AI-data interaction is captured with user ID, timestamp, data accessed, and the AI system used. This isn’t just a compliance exercise; it’s the infrastructure that makes confident AI adoption possible at scale.
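As a rough illustration of what immutable audit logging can look like (a hypothetical structure, not Kiteworks' implementation), the sketch below chains each AI-data interaction record to the previous one by hash, so any later alteration of an entry breaks the chain and is detectable. The fields mirror the ones named above: user ID, timestamp, data accessed, and the AI system used.

```python
import hashlib
import json
from datetime import datetime, timezone

class AIInteractionLog:
    """Append-only, hash-chained record of AI-data interactions (illustrative only)."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, user_id: str, ai_system: str, data_accessed: str, classification: str) -> dict:
        entry = {
            "user_id": user_id,
            "ai_system": ai_system,
            "data_accessed": data_accessed,    # e.g. a document or dataset identifier
            "classification": classification,  # e.g. "official", "official-sensitive"
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        self._last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry has been altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AIInteractionLog()
log.record("jdoe", "copilot", "casework/2024-031.docx", "official")
log.record("asmith", "claude", "policy/consultation-draft.md", "official")
print("chain intact:", log.verify())                # True

log.entries[0]["data_accessed"] = "something-else"  # tampering with an earlier record...
print("chain intact:", log.verify())                # ...is detected: False
```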
Without this layer, U.K. government departments are flying blind on AI risk. With it, they can say “yes” to AI use with the confidence that citizen data is protected — which is precisely what civil servants are asking for.
What U.K. Civil Servants Are Asking For
The U.K. factsheet reveals a workforce that is ready but waiting. Civil servants don’t need to be convinced that AI has potential — most already believe it can save time (66%), and 60% say it’s easy to use. What they need is the clarity and infrastructure to act on that belief.
The index data from the U.S. — where public servants were asked directly what would encourage more AI use — provides a clear signal that applies equally to the U.K. context. Clear, practical guidance (38%), easier-to-use tools (36%), and data privacy assurance (34%) topped the list. Dedicated budget ranked last at 12%.
In the U.K., the data tells a consistent story. 46% cite a lack of clear direction from leadership. 54% have received no training. Around 2 in 5 are unsure what they’re permitted to do with AI. And 44% say training feels like an afterthought in their organisation.
Civil servants aren’t asking for sweeping new programmes. They’re asking for clarity, usability, and confidence — things that can be delivered through policy, communication, and smart procurement.
Why Embedding Matters More Than Anything Else
The U.K. scored 42/100 on embedding — reflecting early or uneven integration, with AI use dependent on local initiative rather than systemic support. That’s a problem, because the index data shows embedding is the dimension that unlocks the most value.
Across all countries, 61% of workers in high-embedding environments report benefits from using AI for advanced or technical work, compared with just 17% where embedding is low. Embedding also levels the playing field across age groups: In high-embedding environments, 58% of public servants aged 55 and older report saving over an hour of time using AI, compared with just 16% in low-embedding settings.
The U.K. currently sits at the wrong end of this spectrum. Only 17% of civil servants report using AI for advanced or technical tasks. Until AI is woven into the systems and workflows civil servants already use, the productivity promise of the government’s AI agenda will remain theoretical.
Three Priorities That Could Change the Trajectory
The index points to three actions that could rapidly lift AI adoption across U.K. public services if pursued together.
First, make permission explicit and operational — backed by secure infrastructure. The AI Playbook and national strategies need to be reinforced through consistent, visible signals from departmental leadership. But permission without protection creates risk. Departments should deploy enterprise AI solutions with data protection controls, governance frameworks, and comprehensive logging — ensuring sensitive data never leaves the trusted environment. Platforms like Kiteworks’ Secure MCP Server demonstrate how this works in practice: enabling AI productivity across tools like Claude, ChatGPT, and Copilot while maintaining the data governance controls U.K. government organisations require. When civil servants know the tools they’re using are approved, compliant, and monitored, hesitation gives way to confidence.
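One practical way to make permission explicit and operational is to encode it, so the rules civil servants read in guidance are the same rules the gateway enforces. The sketch below shows what such a departmental policy might look like as configuration; the tool names, fields, and classification ordering are hypothetical, not any vendor's schema.

```python
# Hypothetical departmental AI usage policy: the same document that tells staff
# what is permitted also drives automated enforcement and logging.
AI_USAGE_POLICY = {
    "approved_tools": {
        "copilot-enterprise": {
            "max_classification": "official",  # highest marking that may be sent
            "log_all_interactions": True,
            "allowed_tasks": ["drafting", "summarisation", "analysis"],
        },
        "internal-llm": {
            "max_classification": "official-sensitive",
            "log_all_interactions": True,
            "allowed_tasks": ["drafting", "summarisation", "analysis", "casework-triage"],
        },
    },
    "blocked": ["personal accounts for any public AI service"],
    "escalation_contact": "departmental AI lead",  # who to ask when unsure
}

# Ordered from least to most sensitive, for simple comparisons.
CLASSIFICATION_ORDER = ["public", "official", "official-sensitive", "secret"]

def tool_permitted(tool: str, classification: str) -> bool:
    """Check a tool/classification pair against the departmental policy."""
    rules = AI_USAGE_POLICY["approved_tools"].get(tool)
    if rules is None:
        return False
    return CLASSIFICATION_ORDER.index(classification) <= CLASSIFICATION_ORDER.index(rules["max_classification"])

print(tool_permitted("copilot-enterprise", "official"))            # True
print(tool_permitted("copilot-enterprise", "official-sensitive"))  # False
print(tool_permitted("chatgpt-personal", "public"))                # False: not an approved tool
```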
Second, build confidence through evidence and incident readiness. U.K. civil servants recognise AI’s potential but fewer than two in five describe it as empowering. Sharing concrete case studies — where AI has reduced administrative burden, improved service triage, or supported better policy analysis — would help make AI tangible rather than theoretical. But confidence also requires knowing what happens when something goes wrong. Consider the scenario: A civil servant accidentally pastes thousands of National Insurance numbers into a public AI tool for data analysis. The data is now in the provider’s systems — potentially stored indefinitely or exposed to other users. Can the department answer what was exposed, when, by whom, and what other sensitive data has been shared? Without immutable audit logs, SIEM integration for real-time monitoring, and chain-of-custody documentation, the answer is no. Incident response capabilities for AI-specific scenarios aren’t optional — they’re the price of admission for responsible adoption.
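Two of the capabilities that scenario calls for can be sketched briefly: a pre-send check that flags National Insurance number patterns in an outbound prompt (using a deliberately simplified regex; real validation has stricter prefix rules), and a query over the kind of audit trail sketched earlier that answers what a given user sent and when. Everything here is illustrative, not a vendor API.

```python
import re
from datetime import datetime, timezone

# Simplified National Insurance number pattern; good enough to flag likely exposure.
NINO_PATTERN = re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b")

def scan_outbound_prompt(prompt: str) -> list[str]:
    """Return any National-Insurance-number-like strings found in a prompt
    before it is allowed to leave the trusted environment."""
    return NINO_PATTERN.findall(prompt)

def exposure_report(audit_entries: list[dict], user_id: str) -> list[dict]:
    """Answer the incident-response questions: what did this user send to
    which AI system, and when?"""
    return [
        {"when": e["timestamp"], "ai_system": e["ai_system"], "data": e["data_accessed"]}
        for e in audit_entries
        if e["user_id"] == user_id
    ]

prompt = "Please analyse these records: QQ 12 34 56 C, AB123456D, and the attached spreadsheet."
hits = scan_outbound_prompt(prompt)
if hits:
    print(f"Blocked: {len(hits)} possible National Insurance numbers detected")

# With an audit trail in place, "what was exposed, when, by whom" becomes a
# query rather than guesswork.
audit_entries = [
    {"timestamp": datetime.now(timezone.utc).isoformat(), "user_id": "jdoe",
     "ai_system": "public-chatbot", "data_accessed": "benefits-export.csv"},
]
print(exposure_report(audit_entries, "jdoe"))
```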
Third, shift from awareness to confidence through role-specific training. The U.K.’s new AI Skills Hub is a step in the right direction, but the index data shows that introductory, generic training isn’t enough. 44% of civil servants say training feels like an afterthought. Short, practical training tailored to specific roles — showing civil servants how AI supports tasks they already perform, with templates, shared prompts, and worked examples — is what bridges the gap between awareness and confident use. Partnering with trusted technology providers can help deliver training at scale while also providing the assurance around security and data protection that civil servants need.
The Stakes Are Higher Than Rankings
The U.K. ranking sixth in this index is disappointing given the level of national ambition, but the real cost isn’t reputational. It’s operational. Every day that civil servants lack secure, approved AI tools is another day of government data flowing through personal accounts with no oversight. Every week without clear, departmental-level guidance is another week of productivity gains left unrealised. Every month without embedded AI governance is another month where the gap between the U.K.’s AI ambition and its public sector reality widens.
Shadow AI is already here. Seventy percent of public servants worldwide use AI; many are doing it outside approved channels. The U.K.’s combination of high awareness, low enablement, and unclear permissions creates the perfect conditions for unsanctioned AI use to proliferate — putting citizen data at risk under U.K. GDPR and the Data Protection Act 2018.
The 345 U.K. public servants surveyed in this index are sending a clear message: Give us the guidance, give us the secure tools, and get out of the way. The question is whether government leaders are listening — and whether they’re willing to solve the shadow AI problem before it becomes a full-blown data protection crisis.
Frequently Asked Questions
What is the Public Sector AI Adoption Index 2026?
The Public Sector AI Adoption Index 2026 is a global study by Public First for the Center for Data Innovation, sponsored by Google. It surveyed 3,335 public servants across 10 countries — including 345 in the United Kingdom — to measure how AI is experienced in government workplaces. The index scores countries across five dimensions: enthusiasm, education, empowerment, enablement, and embedding, each on a 0–100 scale. It goes beyond measuring whether governments have AI strategies and examines whether public servants have the tools, training, permissions, and infrastructure to use AI effectively in their daily roles.
How does the U.K. perform in the index?
The U.K. ranks 6th out of 10 countries with an overall score of 47 out of 100. It scores highest on education (51/100), reflecting growing availability of AI training, but lowest on embedding (42/100), meaning AI is rarely integrated into everyday workflows and remains dependent on local initiative. The U.K. falls behind advanced adopters like Saudi Arabia (66), Singapore (58), and India (58), as well as South Africa (55) and Brazil (49), but sits ahead of the U.S. (45), Germany (44), Japan (43), and France (42). The index characterises the U.K. as an “uneven adopter” — a country with strong central ambition but persistent friction in translating that ambition into confident, everyday use by frontline civil servants.
What is shadow AI, and why is it a risk for U.K. government departments?
Shadow AI refers to public servants using unapproved AI tools — often personal accounts for services like ChatGPT — for work tasks without their organisation’s knowledge or oversight. The Public Sector AI Adoption Index found that in low-enablement environments, 64% of enthusiastic AI users rely on personal logins at work and 70% use AI without their manager knowing. This creates serious security risks for U.K. government departments: Sensitive citizen data may be ingested into public large language models with no audit trail, no data protection controls, and no ability to determine what was exposed in the event of a breach. Shadow AI also creates potential violations of U.K. GDPR and the Data Protection Act 2018, with no forensic evidence to assess the scope of data exposure.
Why is U.K. public sector AI adoption lagging despite strong national ambition?
The index data points to an execution gap rather than an ambition gap. The U.K. has strong national strategies, central leadership from DSIT and the Cabinet Office, and delivery bodies like the Digital Centre of Government. However, this intent has not consistently translated into frontline practice. 54% of civil servants report receiving no AI training. Around 2 in 5 are unsure what they’re permitted to use AI for. 46% say leaders don’t provide clear guidance. And adoption is driven bottom-up by individual curiosity rather than organisational momentum. The result is fragmented AI use largely confined to basic tasks like drafting and analysis, with only 17% reporting use for advanced or technical work.
What should U.K. government departments do to enable secure AI adoption?
The index data — and the experience of advanced adopter countries — suggests departments need to shift from restricting AI access to enabling it securely. This means deploying approved enterprise AI tools with built-in AI data governance controls, such as platforms that keep sensitive data within the private network while enabling productivity with AI assistants like Claude, ChatGPT, and Copilot. Departments should implement data security posture management (DSPM) to classify sensitive data and enforce policies automatically, maintain immutable audit logs for all AI-data interactions, and establish incident response capabilities specific to AI data exposure scenarios. Solutions like Kiteworks’ Secure MCP Server, aligned with U.K. GDPR, the Data Protection Act 2018, and the NCSC’s Cloud Security Principles, demonstrate how departments can enable AI productivity without sacrificing data security or compliance.
Which countries lead the index, and what are they doing differently?
Saudi Arabia (66/100), Singapore (58/100), and India (58/100) are the top-ranked countries in the index. Each took a different path but shared common elements: clear rules on what public servants can and cannot use AI for, approved and secure tools provided through the organisation, and visible leadership support that framed AI as modernisation rather than risk. Singapore built centralised platforms with standardised guidance through its Smart Nation initiative. Saudi Arabia executed a top-down national strategy tied to Vision 2030 with enterprise-wide AI rollout. India drove adoption through cultural momentum with free government-hosted AI courses and consistent positive messaging. The U.K. has comparable strategic ambition but has not yet matched it with the consistent departmental-level execution, clear permissions, and systemic tool provision that these countries have achieved.