Shadow AI Risks in Government: Solving the Hidden Crisis
The United States is home to the most advanced AI ecosystem on the planet. American companies are building the frontier models. American researchers are publishing the breakthrough papers. American venture capital is fueling the next generation of AI startups.
And yet, when it comes to putting AI to work inside government, the U.S. is falling behind countries it should be lapping.
Key Takeaways
- Shadow AI Is Already a Government-Wide Security Risk. In agencies that haven't provided approved AI tools, 64% of enthusiastic AI users rely on personal logins at work and 70% use AI without their manager knowing. Government data — including citizen PII, tax records, and law enforcement files — is flowing through unvetted consumer AI tools with no audit trail, no oversight, and no incident response capability.
- The U.S. Ranks 7th Out of 10 Countries Despite Leading Global AI Development. The United States scored just 45 out of 100 on the Public Sector AI Adoption Index, landing behind South Africa and Brazil. The gap isn't technology — it's governance, clear guidance, and secure infrastructure that gives public servants the confidence to use AI as part of their daily work.
- Restricting AI Access Creates More Risk Than Enabling It Securely. Organizations that take a "move carefully" approach by limiting AI access aren't stopping usage — they're pushing it underground. The index data shows that secure enablement with approved tools, data governance controls, and comprehensive audit logging is the only approach that reduces risk while unlocking productivity gains.
- Embedding AI Into Workflows Is What Unlocks Real Value. The U.S. scored just 39 out of 100 on embedding — the lowest of all five index dimensions. That matters because 61% of public servants in high-embedding environments report benefits from advanced AI use, compared with just 17% where embedding is low. When AI is integrated into the systems people already use, the productivity gains extend across all age groups and skill levels.
- Public Servants Aren't Asking for Budget — They're Asking for Clarity and Security. When asked what would encourage more frequent AI use, U.S. public servants put clear guidance (38%), easier-to-use tools (36%), and data privacy assurance (34%) at the top. Dedicated budget ranked last at 12%. The barriers to adoption are solvable through policy, communication, and smart procurement — not massive new spending.
The Public Sector AI Adoption Index 2026, released recently by Public First for the Center for Data Innovation with sponsorship from Google, surveyed 3,335 public servants across 10 countries — including 301 in the United States. The U.S. ranks seventh out of ten, scoring just 45 out of 100. That places it below South Africa and Brazil, and far behind advanced adopters like Saudi Arabia (66), Singapore (58), and India (58).
This isn’t a technology problem. It’s a governance, security, and leadership problem — and it’s creating a massive shadow AI risk that most government IT leaders are ignoring.
The Numbers That Should Keep Every Government CISO Up at Night
The index measures how public servants experience AI across five dimensions: enthusiasm, education, empowerment, enablement, and embedding. For the U.S., the scores paint a picture of a workforce that has access to AI but hasn’t been given the confidence, clarity, or secure infrastructure to use it well:
- Enthusiasm: 43/100 — one of the lowest scores globally. Forty percent of U.S. public servants describe AI as “overwhelming.”
- Education: 50/100 — training exists but tends to be introductory and unevenly distributed across organizations.
- Empowerment: 46/100 — more than one in three public servants don’t know whether their organization even has a formal AI policy.
- Enablement: 45/100 — tools are available, but “access” doesn’t mean “secure” or “compliant.”
- Embedding: 39/100 — the lowest score of all five. AI tools sit alongside legacy systems rather than being integrated into workflows.
Nearly half of U.S. public servants (45%) say their organization should “move carefully to avoid mistakes.” Fewer than half feel they receive clear direction from leadership on how AI should be used. Only 56% say they feel confident using AI tools.
These aren’t the numbers of a workforce resisting AI. These are the numbers of a workforce waiting for someone to tell them it’s safe to proceed.
The Shadow AI Time Bomb
Here is the finding that should terrify every government CISO.
In low-enablement environments across the index, 64% of enthusiastic AI workers report using personal logins at work, and 70% use AI for work tasks without their manager knowing.
When governments fail to provide approved AI tools, clear policies, or accessible support, public servants don’t stop using AI. They just do it on their own — outside the guardrails. And the consequences are far more serious than a compliance checkbox.
Think about what this means in practice for U.S. federal agencies. Government data flowing through personal ChatGPT accounts with no oversight, no audit trail, and no data security controls. Sensitive citizen information — PII/PHI, tax records, law enforcement data — potentially being ingested into public LLMs for summarization, analysis, or drafting. Policy decisions shaped by AI tools that haven’t been vetted for accuracy, bias, or appropriateness. And potential compliance violations across HIPAA, FISMA, and state privacy laws — with no forensic evidence to determine scope.
The irony is painful. Organizations trying to be “cautious” about AI by restricting access are creating far more risk than organizations that provide approved tools with clear usage guidance. The index data bears this out across every country studied.
This is where the conversation needs to shift from “should we allow AI” to “how do we enable AI securely.” Solutions like Kiteworks’ Secure MCP Server represent the kind of infrastructure that can bridge this gap — enabling AI productivity with tools like Claude, ChatGPT, and Copilot while keeping sensitive data within the private network. Existing governance frameworks (RBAC/ABAC) extend to all AI interactions, every AI operation is logged for compliance and forensics, and sensitive content never leaves the trusted environment. For federal agencies, FedRAMP Moderate Authorization and alignment with White House AI memoranda and the NIST AI Risk Management Framework mean these protections map directly to existing compliance obligations.
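To make “enable AI securely” concrete, here is a minimal Python sketch of the pattern: a governance layer sits between the user and the external assistant, checks the caller’s role against the classification of the data in the request, and writes an audit record for every decision, allowed or blocked. The roles, classification labels, and function names are illustrative assumptions, not Kiteworks’ API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative classification ladder and role entitlements (assumed for this sketch,
# not a real product schema).
CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2, "classified": 3}
ROLE_MAX_CLASSIFICATION = {"analyst": "internal", "case_worker": "confidential", "contractor": "public"}

@dataclass
class AIRequest:
    user_id: str
    role: str
    ai_system: str            # e.g. "claude", "copilot"
    prompt: str
    data_classification: str  # label attached by upstream classification tooling

audit_log: list[dict] = []    # stand-in for an append-only audit store

def gate_ai_request(req: AIRequest) -> bool:
    """Allow the request only if the user's role may handle data at this classification."""
    allowed_max = ROLE_MAX_CLASSIFICATION.get(req.role, "public")
    permitted = CLASSIFICATION_RANK[req.data_classification] <= CLASSIFICATION_RANK[allowed_max]

    # Every decision, allowed or blocked, is recorded for compliance and forensics.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": req.user_id,
        "role": req.role,
        "ai_system": req.ai_system,
        "data_classification": req.data_classification,
        "decision": "allowed" if permitted else "blocked",
    })
    return permitted

# A contractor trying to send confidential data to an external assistant is blocked and logged.
request = AIRequest("jdoe", "contractor", "claude", "Summarize this case file...", "confidential")
print(gate_ai_request(request))   # False
print(audit_log[-1]["decision"])  # "blocked"
```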
The alternative — pretending that restricting access will stop usage — is a fantasy that the index data has now demolished with hard numbers.
The Translation Gap: Federal Ambition vs. Frontline Reality
The federal government has not been sitting still. Nearly 90% of federal agencies are already using AI in some form, according to recent research from Google. White House Executive Orders and America’s AI Action Plan have positioned AI as a strategic priority. The Office of Management and Budget has issued updated guidance on AI governance and procurement. State CIOs have ranked AI as their number one priority for 2026, with over 90% of states at least piloting AI projects.
So the building blocks are in place. But the index reveals a stark translation gap between agency-level activity and frontline experience.
U.S. public servants report some of the highest levels of personal AI experience in the index. Seventy-six percent say they use AI in their personal lives, and nearly three-quarters (72%) of those also use AI at work. Almost nine in ten (89%) who use AI at work report having access to tools through their organization. Around a third (32%) have access to enterprise-grade AI tools — a higher share than many other countries.
But that access hasn’t translated into confident, enthusiastic use. The U.S. scored just 43/100 on enthusiasm — meaning most public servants have yet to see clear, role-specific benefits in their day-to-day tasks. AI is more often described as overwhelming than empowering. Fewer workers report experiencing tangible gains like time savings or AI functioning as an effective assistant.
And the “access” that does exist often lacks the security and governance controls that federal environments demand. Most agencies provide generic AI tools without data protection agreements. There are no audit trails tracking what data was shared with AI systems, when, or by whom. No ability to revoke access or delete data from AI training sets after the fact. The enablement score reflects availability, not safety — and that distinction matters enormously when government data is involved.
Compare this to what the advanced adopters have built. In Singapore, 73% of public servants are clear on what they can and cannot use AI for, and 58% know exactly who to ask when they hit a problem. In Saudi Arabia, a top-down national strategy has made AI feel like modernization rather than disruption, with 65% accessing enterprise-level AI tools and 79% using AI for advanced or technical tasks. In India, 83% of public sector workers are optimistic about AI and 59% want it to dramatically change their daily work.
Those countries didn’t succeed because they had better technology than the U.S. They succeeded because they made it easier for public servants to use AI with confidence. Clear rules. Approved tools. Visible support. The U.S. has the technology advantage — what it lacks is the connective tissue.
The Missing Layer: AI Data Governance
The index found that U.S. public servants want “clear, practical guidance on how to apply AI in the public sector” (38%) and “assurance of data privacy and security” (34%). These aren’t abstract preferences. They point to a foundational gap that sits underneath every other adoption challenge: Most government agencies lack visibility into what data is being shared with AI systems.
Which employees are using AI, and for what purposes? Do AI-generated outputs contain sensitive information that shouldn’t be shared externally? How are data classification policies enforced when AI tools are involved? For most agencies, the honest answer to all of these questions is “we don’t know.”
This is where AI data governance frameworks become essential — not as a barrier to adoption, but as the foundation that makes confident adoption possible. Data Security Posture Management (DSPM) capabilities can discover and classify sensitive data across repositories, including data being ingested into AI systems. Automated policy enforcement can block privileged or confidential data from AI ingestion based on classification labels. Comprehensive audit logs can track all AI-data interactions. And when aligned with the NIST AI Risk Management Framework, these capabilities help agencies govern, map, and manage AI risks throughout the data life cycle.
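As a rough illustration of the classification step, the Python sketch below scans content for a couple of common PII patterns and refuses to pass anything confidential to an AI tool. The regex patterns and labels are deliberately simplified assumptions; production DSPM tooling uses far broader detection, validation, and context analysis.

```python
import re

# Simplified detectors for two common identifier formats (illustrative only).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ein": re.compile(r"\b\d{2}-\d{7}\b"),
}

def classify(text: str) -> str:
    """Assign a coarse sensitivity label to a piece of content."""
    if any(pattern.search(text) for pattern in PII_PATTERNS.values()):
        return "confidential"
    return "internal"

def allow_ai_ingestion(text: str) -> bool:
    """Block confidential or classified content from being sent to an AI tool."""
    return classify(text) not in {"confidential", "classified"}

print(allow_ai_ingestion("Draft a status update for the quarterly report"))  # True
print(allow_ai_ingestion("Summarize claims for applicant SSN 123-45-6789"))  # False
```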
Kiteworks’ approach to this challenge is instructive. By integrating DSPM with automated policy enforcement and immutable audit logging, organizations can tag data by sensitivity level — public, internal, confidential, classified — and enforce those classifications automatically when AI tools are involved. Every AI-data interaction is captured with user ID, timestamp, data accessed, and the AI system used. This isn’t just a compliance exercise; it’s the infrastructure that makes confident AI adoption possible at scale.
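One common way to make that logging tamper-evident is hash chaining, where each record commits to the one before it so any edit or deletion breaks the chain. The sketch below shows the idea with the fields mentioned above (user ID, timestamp, data accessed, AI system); it is a minimal illustration of the technique, not Kiteworks’ implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

chain: list[dict] = []  # append-only audit chain; each entry commits to the previous one

def record_ai_interaction(user_id: str, ai_system: str, data_accessed: str) -> dict:
    """Append a tamper-evident audit record for one AI-data interaction."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "ai_system": ai_system,
        "data_accessed": data_accessed,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return entry

def verify_chain() -> bool:
    """Recompute every hash; any edited or deleted record breaks verification."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if entry["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

record_ai_interaction("jdoe", "copilot", "benefits_case_4821.docx")
record_ai_interaction("asmith", "claude", "draft_policy_memo.md")
print(verify_chain())  # True
```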
Without this layer, governments are flying blind on AI risk. With it, they can say “yes” to AI use with the confidence that sensitive data is protected — which is precisely what public servants are asking for.
What U.S. Public Servants Are Asking For
When asked what would encourage them to use AI tools more frequently, U.S. public servants were remarkably specific:
- Clear, practical guidance on how to apply AI in the public sector (38%)
- Easier-to-use tools that don’t require specialist technical skills (36%)
- Assurance of data privacy and security (34%)
- Training or upskilling support tailored to specific roles (30%)
- Better integration with existing software and systems (29%)
Notice what is not at the top of the list: Dedicated budget (12%) and senior management endorsement (20%) ranked near the bottom. Public servants are not asking for sweeping new programs or expensive initiatives. They are asking for clarity, usability, and confidence — things that can be delivered through policy, communication, and smart procurement.
And notice how the top three requests — guidance, usability, and data security assurance — form an interconnected whole. You can’t provide clear guidance without knowing what’s safe to use. You can’t assure data privacy without governance infrastructure. And you can’t make tools easy to use if public servants are paralyzed by uncertainty about whether using them will get them in trouble. Solving one without the others doesn’t work.
Why Embedding Matters More Than Anything Else
The U.S. scored lowest on embedding (39/100), and the index data shows why that’s the metric that matters most.
Across all countries, 61% of workers in high-embedding environments report benefits from using AI for advanced or technical work, compared with just 17% where embedding is low. Embedding also levels the playing field across age groups: in high-embedding environments, 58% of public servants aged 55 and older report saving over an hour of time using AI, compared with just 16% in low-embedding settings. When AI is woven into the systems people already use, adoption stops being about tech-savviness and starts being about everyone getting better at their jobs.
The U.S. currently sits at the opposite end of this spectrum. The index describes “minimal formal infrastructure, with few supporting structures, limited investment, and significant barriers to integration with existing systems.” Until embedding improves, the productivity gains that AI promises will remain concentrated among a small group of early adopters rather than lifting the entire workforce.
Three Priorities That Could Change the Trajectory
The index points to three specific actions that could rapidly lift AI adoption across U.S. public services if pursued together.
First, set a clear mandate from the top — backed by approved, secure infrastructure. Public servants need consistent, visible reassurance that AI use is encouraged, supported, and aligned with public service values. But permission without protection is reckless. Agencies should deploy enterprise AI solutions with data protection agreements, governance controls, and comprehensive logging — ensuring sensitive data never leaves the private network. Platforms like Kiteworks’ Secure MCP Server demonstrate how this can work in practice: enabling AI productivity across tools like Claude, ChatGPT, and Copilot while maintaining the data governance controls federal agencies require. When public servants know the tools they’re using are approved, compliant, and monitored, the cultural permission follows naturally.
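For readers wondering what the MCP piece looks like in practice, here is a minimal sketch using the open-source Model Context Protocol Python SDK and its FastMCP helper, assuming the server runs inside the agency’s own network: the assistant calls a narrowly scoped tool that redacts identifiers before anything reaches the model. The tool name, data source, and redaction logic are illustrative, not a Kiteworks interface.

```python
# Minimal Model Context Protocol server (open-source MCP Python SDK, FastMCP helper).
# Assumption: this process runs inside the agency network, so the AI assistant only
# ever sees what the tool chooses to return, never the underlying case files.
import re

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("agency-records")

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def load_case_notes(case_id: str) -> str:
    # Stand-in for a lookup against an internal system of record.
    return f"Case {case_id}: applicant SSN 123-45-6789, benefits approved 2026-01-05."

@mcp.tool()
def summarize_case(case_id: str) -> str:
    """Return case notes with direct identifiers redacted before they reach the model."""
    notes = load_case_notes(case_id)
    return SSN_PATTERN.sub("[REDACTED-SSN]", notes)

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport for a locally connected assistant
```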
Second, build confidence through evidence and incident readiness. Many U.S. public servants have not yet seen clear, role-specific benefits from AI. Sharing concrete examples of where AI has reduced administrative burden, improved service delivery, or supported better decision-making would help make AI tangible rather than abstract. But confidence also requires knowing what happens when something goes wrong. Consider the scenario: A public servant accidentally pastes thousands of citizen Social Security numbers into a public AI tool for data analysis. The data is now in the provider’s systems — potentially stored indefinitely or exposed to other users. Can the agency answer what was exposed, when, by whom, and what other sensitive data has been shared? Without immutable audit logs, SIEM integration for real-time monitoring, and chain of custody documentation, the answer is no. Incident response capabilities for AI-specific scenarios aren’t optional — they’re the price of admission for responsible adoption.
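To show why those audit records matter, here is a hypothetical incident-scoping sketch: given logs of the kind described above, a few lines of Python can answer what was exposed, when, by whom, and to which AI system. The record format and field names are assumptions for illustration only.

```python
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# Hypothetical audit records of the kind a governance layer would already be writing;
# in practice these would come from a SIEM or the platform's immutable log store.
audit_records = [
    {"timestamp": "2026-01-12T14:03:00+00:00", "user_id": "jdoe", "ai_system": "public_chatbot",
     "prompt_excerpt": "Analyze these applicants: 123-45-6789, 987-65-4321 ..."},
    {"timestamp": "2026-01-12T15:10:00+00:00", "user_id": "asmith", "ai_system": "approved_assistant",
     "prompt_excerpt": "Draft a press release about the new permit portal"},
]

def scope_incident(records: list[dict]) -> list[dict]:
    """Return the records that exposed SSNs to an AI system, with who, when, and where."""
    findings = []
    for rec in records:
        ssns = SSN_PATTERN.findall(rec["prompt_excerpt"])
        if ssns:
            findings.append({
                "user_id": rec["user_id"],
                "timestamp": rec["timestamp"],
                "ai_system": rec["ai_system"],
                "ssn_count": len(ssns),
            })
    return findings

for finding in scope_incident(audit_records):
    print(finding)  # answers what was exposed, when, by whom, and to which system
```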
Third, embed practical, role-specific training and guidance. Awareness of AI is high in the U.S., but confidence is not. Short, practical training tailored to specific roles can bridge that gap. This means explicit permission for low-risk tasks — writing, research, summarizing, brainstorming — along with role-specific guidance that shows public servants how AI supports tasks they already do. Templates, shared prompts, and worked examples make adoption concrete. Partnering with trusted technology providers can help deliver training at scale while also providing the assurance around security and data protection that public servants are asking for.
The Stakes Are Higher Than Rankings
The U.S. ranking seventh in this index is embarrassing, but the real cost isn’t reputational. It’s operational. Every day that public servants lack secure, approved AI tools is another day of government data flowing through personal accounts with no oversight. Every week without clear guidance is another week of productivity gains left on the table. Every month without embedded AI governance is another month where the gap between the U.S. private sector and its public sector widens.
Shadow AI is already here. In agencies that haven’t enabled AI securely, 70% of AI users are working outside approved channels, without their managers’ knowledge. Restricting access creates more risk, not less. The tools exist — what’s missing is secure, approved infrastructure paired with cultural permission and clear guidance.
The 301 U.S. public servants surveyed in this index are sending a clear message: Give us the guidance, give us the secure tools, and get out of the way. The question is whether government leaders are listening — and whether they’re willing to solve the shadow AI problem before it becomes a full-blown data security crisis.
Frequently Asked Questions
What is the Public Sector AI Adoption Index 2026?
The Public Sector AI Adoption Index 2026 is a global study by Public First for the Center for Data Innovation, sponsored by Google. It surveyed 3,335 public servants across 10 countries — including 301 in the United States — to measure how AI is experienced in government workplaces. The index scores countries across five dimensions: enthusiasm, education, empowerment, enablement, and embedding, each on a 0–100 scale. It goes beyond measuring whether governments have AI strategies and examines whether public servants have the tools, training, permissions, and infrastructure to use AI effectively in their daily roles.
How does the United States rank in the Public Sector AI Adoption Index 2026?
The United States ranks 7th out of 10 countries with an overall score of 45 out of 100. Its strongest dimension is education (50/100), reflecting available though unevenly distributed training, while enablement (45/100) and empowerment (46/100) sit in the middle; its weakest is embedding (39/100), meaning AI is rarely integrated into everyday workflows. The U.S. falls behind advanced adopters like Saudi Arabia (66), Singapore (58), and India (58), as well as South Africa (55) and Brazil (49). The index characterizes the U.S. as an “uneven adopter” — a country with strong AI foundations and agency-level activity but slower diffusion into confident, everyday use by frontline public servants.
What is shadow AI, and why is it a risk for government agencies?
Shadow AI refers to public servants using unapproved AI tools — often personal accounts for services like ChatGPT — for work tasks without their organization’s knowledge or oversight. The Public Sector AI Adoption Index found that in low-enablement environments, 64% of enthusiastic AI users rely on personal logins at work and 70% use AI without their manager knowing. This creates serious security risks for government agencies: Sensitive citizen data (PII/PHI, tax records, law enforcement information) may be ingested into public large language models with no audit trail, no data protection controls, and no ability to determine what was exposed in the event of a breach. Shadow AI also creates potential compliance violations across HIPAA, FISMA, and state privacy laws.
What do U.S. public servants say would encourage them to use AI more?
According to the index, U.S. public servants identified clear, practical guidance on applying AI in the public sector (38%), easier-to-use tools that don’t require specialist technical skills (36%), and assurance of data privacy and security (34%) as their top three priorities. Training tailored to specific roles (30%) and better integration with existing systems (29%) also ranked highly. Notably, dedicated budget for AI projects ranked last at just 12%, and senior management endorsement came in at only 20%. This suggests the primary barriers to adoption are not financial but structural — public servants need clarity on what’s allowed, tools that are secure and intuitive, and confidence that using AI won’t create compliance or career risk.
How can government agencies reduce shadow AI risk?
The index data — and the experience of advanced adopter countries — suggests agencies need to shift from restricting AI access to enabling it securely. This means deploying approved enterprise AI tools with built-in data governance controls, such as platforms that keep sensitive data within the private network while enabling productivity with AI assistants like Claude, ChatGPT, and Copilot. Agencies should implement Data Security Posture Management (DSPM) to classify sensitive data and enforce policies automatically, maintain immutable audit logs for all AI-data interactions, and establish incident response capabilities specific to AI data exposure scenarios. Solutions like Kiteworks’ Secure MCP Server, which is FedRAMP Moderate Authorized and aligned with the NIST AI Risk Management Framework, demonstrate how agencies can enable AI productivity without sacrificing data security or compliance.
Which countries lead the index, and what did they do differently?
Saudi Arabia (66/100), Singapore (58/100), and India (58/100) are the top-ranked countries in the index. Each took a different path but shared common elements: clear rules on what public servants can and cannot use AI for, approved and secure tools provided through the organization, and visible leadership support that framed AI as modernization rather than risk. Singapore built centralized platforms with standardized guidance through its Smart Nation initiative. Saudi Arabia executed a top-down national strategy tied to Vision 2030 with enterprise-wide AI rollout. India drove adoption through cultural momentum with free government-hosted AI courses and consistent positive messaging. None of these countries had better underlying AI technology than the United States — they succeeded by making it easier and safer for public servants to say yes to AI every day.