France’s Public Sector AI Adoption Crisis: Last Place Revealed
France has done almost everything right on paper. A National AI Strategy since 2018. Billions invested through France 2030. World-class research institutions. A ‘trusted AI’ framework closely aligned with the EU AI Act. A stated commitment to technological sovereignty and public service modernisation.
Key Takeaways
- France Ranks Dead Last in Public Sector AI Adoption — Despite Being an Early Mover. France scored just 42 out of 100 on the Public Sector AI Adoption Index, placing 10th out of 10 countries surveyed. This is despite launching its National AI Strategy in 2018, investing heavily through France 2030, and building one of Europe's strongest AI research ecosystems. Strategy has not translated into frontline practice.
- Nearly Half of French Public Servants Have Never Used AI at Work. Around 45% of French public sector workers report never having used AI tools in their role — the weakest adoption profile in the entire index. Over half say AI use has stagnated or declined over the past year, pointing to a workforce that isn't just cautious — it's disengaging.
- Two-Thirds of French Public Servants Have Received Zero AI Training. 66% report receiving no training on how to use AI. More than 1 in 3 are unclear about what they can and cannot use AI for at work. Without training or guidance, even willing public servants have no on-ramp to adoption — and the shadow AI risks that come with unsupported experimentation grow.
- France Is Trapped in a Self-Reinforcing Cycle of Low Adoption. Weak embedding limits adoption. Low adoption limits visible benefits. The absence of benefits sustains low optimism. France is the least optimistic country in the index, with only 33% of public servants feeling positive about AI in the public sector. Almost 1 in 3 believe nothing they do at work can be accomplished by AI.
- Nearly 6 in 10 French Public Servants Have Never Even Discussed AI With a Colleague. 58% say they have never discussed using AI with a colleague or witnessed colleagues expressing excitement about it. Without peer discussion, shared learning, or visible success stories, there are almost no informal pathways through which confidence or curiosity about AI can develop.
And yet, when it comes to how public servants experience AI in their daily work, France finished last.
The Public Sector AI Adoption Index 2026, released by Public First for the Center for Data Innovation with sponsorship from Google, surveyed 3,335 public servants across 10 countries — including 342 in France. France scored 42 out of 100, placing 10th out of 10. Behind Japan. Behind Germany. Behind every other country in the study.
For a nation that has positioned itself as a European AI leader, that result demands explanation. The data provides one — and it has less to do with technology than with what happens when strategy never reaches the people it’s supposed to serve.
The Numbers Behind France’s Last-Place Finish
The index measures how public servants experience AI across five dimensions: enthusiasm, education, empowerment, enablement, and embedding. For France, every single score tells the same story — a workforce that has been left behind by its own government’s ambition:
- Enthusiasm: 46/100 — France is the least optimistic country in the index. Only 33% of public servants feel positive about AI in the public sector. Almost 1 in 3 believe nothing they do at work can be accomplished by AI.
- Education: 46/100 — Two-thirds (66%) of public servants report receiving no AI training whatsoever. More than 1 in 3 are unclear about what they can and cannot use AI for at work.
- Empowerment: 39/100 — Over 2 in 5 are unsure whether their workplace even has a policy governing AI use. More than 50% disagree that leaders provide clear communication or direction on AI.
- Enablement: 42/100 — Only 27% say their organisation has invested in AI tools. Access to enterprise-grade or in-house AI tools is minimal. Technical support is often absent.
- Embedding: 36/100 — the lowest embedding score in the entire index. Organisational structures to support scaling are weak. AI is isolated from routine workflows rather than integrated into them.
45% of French public sector workers have never used AI in their role. Over half say AI use has stagnated or declined in the past year. 58% have never even discussed AI with a colleague.
These aren’t the numbers of a workforce that has weighed AI and found it wanting. These are the numbers of a workforce that has barely been given the chance to try.
The Shadow AI Risk France Isn’t Talking About
Here is the global finding from the index that French government security leaders need to confront.
In low-enablement environments across the index, 64% of enthusiastic AI workers report using personal logins at work, and 70% use AI for work tasks without their manager knowing.
France’s enablement score is 42/100. Its empowerment score is 39/100. Only 27% of organisations have invested in AI tools. More than 2 in 5 public servants don’t know if their workplace has an AI policy. That is precisely the environment where shadow AI takes root.
The index notes an important contrast with Germany. In Germany’s compliance-oriented culture, unclear rules tend to discourage use altogether. In France, the picture is more mixed — while overall adoption is low, those who are using AI are often doing so without organisational support or visibility. Nearly 1 in 3 public servants say their workplace actively makes it difficult to use AI where it would be helpful, pointing to a workforce where motivated individuals work around institutional barriers rather than through them.
Think about what this means in practice. Public servants using personal ChatGPT or Mistral accounts to draft policy documents, summarise casework, or analyse datasets containing citizen information. Sensitive data — protected under GDPR, the EU AI Act, and France’s Loi Informatique et Libertés — potentially being ingested into public large language models with no audit trail, no data classification controls, and no ability to determine what was exposed after the fact.
The irony is familiar from every country in this index. Organisations trying to be cautious about AI by restricting access or staying silent on permissions aren’t preventing AI use. They’re driving it underground — creating far more AI risk than organisations that provide approved tools with clear usage guidance.
This is where the conversation needs to shift from “should we allow AI” to “how do we enable AI securely.” Solutions like Kiteworks’ Secure MCP Server represent the kind of infrastructure that can bridge this gap — enabling AI productivity with tools like Claude, ChatGPT, and Copilot while keeping sensitive data within the private network. Existing governance frameworks (RBAC/ABAC) extend to all AI interactions, every AI operation is logged for compliance and forensics, and sensitive content never leaves the trusted environment. For French government organisations, alignment with GDPR, the EU AI Act, and France’s national data privacy framework means these protections map directly to existing compliance obligations.
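The RBAC extension described above can be sketched in a few lines. This is a purely illustrative example, not Kiteworks’ actual API: the idea is simply that the same role-to-data mapping that governs a public servant’s direct access is re-checked on every AI interaction, so an AI assistant never becomes a side door around existing permissions. All role and category names here are hypothetical.

```python
# Hypothetical RBAC gate for AI interactions: an AI request is only
# honoured if the caller's role already grants access to the data
# category the request touches. Role names are illustrative only.
ROLE_PERMISSIONS = {
    "caseworker": {"case_notes", "correspondence"},
    "analyst": {"aggregated_stats"},
}

def ai_request_permitted(role: str, data_category: str) -> bool:
    """Reapply the same role check used for direct access to AI access."""
    return data_category in ROLE_PERMISSIONS.get(role, set())

# A caseworker may route case notes through an approved AI tool;
# an analyst, or an unknown role, may not.
print(ai_request_permitted("caseworker", "case_notes"))  # True
print(ai_request_permitted("analyst", "case_notes"))     # False
```

The design point is that the permission table is shared with the rest of the organisation’s access control, not duplicated for AI: one policy, enforced at every channel.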
The alternative — relying on strategy documents and ethical frameworks while public servants figure things out on their own — is what produced a last-place finish.
The Vicious Cycle: Why France Is Stuck
The index identifies a self-reinforcing pattern in France that doesn’t exist in the same form anywhere else in the study.
Weak embedding limits adoption. Low adoption limits visible benefits. The absence of visible benefits sustains low optimism. And low optimism removes the cultural pressure that might otherwise push organisations to invest in tools, training, and guidance.
The numbers trace this cycle clearly. With only 27% of organisations investing in AI tools and minimal integration with existing systems, most French public servants have never seen AI save time, improve decisions, or enhance service delivery. Its value remains abstract. And when value feels abstract, there’s no urgency to act.
Compare this to what the advanced adopters have built. In Singapore, 73% of public servants are clear on what they can and cannot use AI for, and 58% know exactly who to ask when they hit a problem. In Saudi Arabia, 65% access enterprise-level AI tools and 79% use AI for advanced or technical tasks. In India, 83% are optimistic about AI and 59% want it to dramatically change their daily work.
Those countries broke through the cycle by making AI tangible — giving public servants tools, training, and permission simultaneously rather than sequentially. France has the strategy. It has the ethical framework. What it hasn’t done is put AI in front of public servants in a way that lets them see what it can do.
The Missing Layer: AI Data Governance for French Government
France’s strong emphasis on trusted AI and alignment with the EU AI Act creates a robust policy foundation. But policy frameworks alone don’t protect citizen data when public servants are using unapproved tools without oversight.
Most French government organisations lack visibility into what data is being shared with AI systems. Which public servants are using AI, and for what purposes? Do AI-generated outputs contain sensitive information that shouldn’t be shared externally? How can data classification policies be enforced when AI tools are involved? For most organisations, the answer is “we don’t know.”
This is where AI data governance frameworks become essential — not as yet another compliance layer, but as the infrastructure that makes confident adoption possible within France’s existing regulatory compliance requirements. Data security posture management (DSPM) capabilities can discover and classify sensitive data across repositories, including data being ingested into AI systems. Automated policy enforcement can block privileged or confidential data from AI ingestion based on classification labels. Comprehensive audit logs can track all AI-data interactions. And when aligned with GDPR, the EU AI Act, and France’s Loi Informatique et Libertés, these capabilities help organisations govern AI risk throughout the data life cycle.
Kiteworks’ approach to this challenge is instructive. By integrating DSPM with automated policy enforcement and immutable audit logging, organisations can tag data by sensitivity level and enforce those classifications automatically when AI tools are involved. Every AI-data interaction is captured with user ID, timestamp, data accessed, and the AI system used. For France — where overlapping regulatory compliance requirements between the EU AI Act, GDPR, and national law raise the perceived risk of experimentation — this kind of infrastructure turns compliance from a barrier into an enabler.
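The classify-enforce-log pattern described above can be illustrated with a minimal sketch. This is a hypothetical example of the general mechanism, not any vendor’s actual implementation: documents carry a sensitivity label, a single policy gate decides whether AI ingestion is allowed, and every decision, permitted or blocked, lands in an append-only audit trail with the fields the article lists.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import List

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

@dataclass
class Document:
    doc_id: str
    sensitivity: Sensitivity  # label assigned by DSPM-style classification

@dataclass
class AuditEntry:
    user_id: str
    doc_id: str
    ai_system: str
    allowed: bool
    timestamp: str

AUDIT_LOG: List[AuditEntry] = []

# Illustrative policy: nothing above INTERNAL may reach an AI tool.
MAX_AI_SENSITIVITY = Sensitivity.INTERNAL

def request_ai_ingestion(user_id: str, doc: Document, ai_system: str) -> bool:
    """Check the document's classification label before it reaches an AI
    tool, and record the decision in the audit trail either way."""
    allowed = doc.sensitivity.value <= MAX_AI_SENSITIVITY.value
    AUDIT_LOG.append(AuditEntry(
        user_id=user_id,
        doc_id=doc.doc_id,
        ai_system=ai_system,
        allowed=allowed,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
    return allowed

# A drafting note passes; a restricted case file is blocked and logged.
print(request_ai_ingestion("agent-042", Document("note-1", Sensitivity.INTERNAL), "chat-assistant"))    # True
print(request_ai_ingestion("agent-042", Document("case-9", Sensitivity.RESTRICTED), "chat-assistant"))  # False
```

The important property is that blocked attempts are logged too: that is what gives an organisation forensic visibility into near-misses, not just successful use.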
What French Public Servants Need to Break the Cycle
The data from across the index is consistent: Public servants aren’t asking for sweeping new programmes or massive budgets. They’re asking for clarity, usability, and confidence.
In Germany — France’s closest peer in the cautious adopter tier — public servants cite data privacy assurance (38%) and clear guidance on how to apply AI (37%) as the top factors that would encourage greater use. The pattern holds across every country: Clear guidance, easier-to-use tools, and data security assurance consistently rank as the top three enablers. Dedicated budget ranks last.
In France specifically, the index points to an even more fundamental gap: relevance. Many public servants don’t see how AI applies to their role. 31% believe nothing they do at work can be accomplished by AI. 44% say only a small part of their work could benefit. Fast, practical training focused on concrete use cases — drafting, analysis, case management, service delivery — is essential to close this perception gap before scepticism hardens into permanent disengagement.
Why Embedding Matters More Than Anything Else
France scored 36/100 on embedding — the lowest score of any country on any dimension in the entire index. And the global data shows why that’s the metric that matters most.
Across all countries, 61% of workers in high-embedding environments report benefits from using AI for advanced or technical work, compared with just 17% where embedding is low. Embedding also levels the playing field across age groups: In high-embedding environments, 58% of public servants aged 55 and older report saving over an hour of time using AI, compared with just 16% in low-embedding settings.
France sits at the extreme low end of this spectrum. Until AI is woven into the systems and workflows public servants already use, the productivity promise of France’s AI strategy will remain entirely theoretical — and the vicious cycle of low adoption, low visibility, and low optimism will continue unchallenged.
Three Priorities That Could Break France Out of Last Place
The index points to three actions that could begin to reverse France’s trajectory if pursued together — and with urgency.
First, move fast to give clear permission and provide secure infrastructure. Public servants need unambiguous signals — now — that using AI is expected, supported, and safe. Continued ambiguity is reinforcing hesitation and disengagement. Senior leaders must clearly state that AI can and should be used for everyday, low-risk tasks, backed by simple guidance that removes fear of noncompliance. But permission without protection creates risk. Organisations should deploy enterprise AI solutions with AI data protection controls, governance frameworks, and comprehensive logging. Platforms like Kiteworks’ Secure MCP Server demonstrate how this works in practice: enabling AI productivity across tools like Claude, ChatGPT, and Copilot while maintaining the AI data governance controls French government organisations require under GDPR, the EU AI Act, and national law.
Second, rapidly connect AI to real jobs and real tasks — with incident response readiness built in. The core problem in France is relevance: many public servants don’t see how AI applies to their role, so training must centre on the concrete tasks they already perform. Unless AI is quickly shown to save time or improve outcomes, scepticism will harden. At the same time, organisations need incident response capabilities for AI-specific scenarios. Consider a public servant accidentally pasting thousands of citizen records into a public AI tool. Can the organisation answer what was exposed, when, by whom, and what other data has been shared? Without immutable audit logs, SIEM integration, and chain-of-custody documentation, the answer is no.
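The incident questions above — what was exposed, when, by whom, through which system — are answerable only if every AI-data interaction was logged in the first place. A minimal illustration of such a query, using hypothetical field names rather than any real product’s schema:

```python
from datetime import datetime

# Each entry mirrors the fields the article lists: user ID, timestamp,
# data accessed, and the AI system used. Values are illustrative.
audit_log = [
    {"user": "agent-017", "ts": "2026-01-10T09:14:00", "data": "case-file-881",      "ai_system": "public-chatbot"},
    {"user": "agent-017", "ts": "2026-01-10T09:20:00", "data": "citizen-export.csv", "ai_system": "public-chatbot"},
    {"user": "agent-031", "ts": "2026-01-11T14:02:00", "data": "draft-memo",         "ai_system": "approved-assistant"},
]

def exposure_report(log, user, start, end):
    """Answer the core incident questions for one user and time window:
    what data was exposed, when, and through which AI system."""
    s, e = datetime.fromisoformat(start), datetime.fromisoformat(end)
    return [
        entry for entry in log
        if entry["user"] == user and s <= datetime.fromisoformat(entry["ts"]) <= e
    ]

# Reconstruct everything agent-017 shared on the day of the incident.
incident = exposure_report(audit_log, "agent-017",
                           "2026-01-10T00:00:00", "2026-01-10T23:59:59")
for entry in incident:
    print(entry["ts"], entry["data"], "via", entry["ai_system"])
```

Without a log like this, the honest answer to “what was exposed?” is the one the article warns about: no one knows.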
Third, actively rebuild workplace culture around AI. Low discussion and low visibility are reinforcing disengagement. 58% of French public servants have never discussed AI with a colleague. Leaders need to create space for experimentation, shared learning, and visible success stories that show AI working in practice. Governed sandboxes, workplace competitions, and peer-to-peer learning networks can begin to create the cultural momentum that France currently lacks entirely. Without urgent cultural change, even strong policy frameworks will fail to translate into adoption — as the index has now demonstrated.
The Stakes Are Higher Than Rankings
France finishing last in this index is more than embarrassing — it’s a warning. Every day that public servants lack secure, approved AI tools is another day of citizen data flowing through personal accounts with no oversight. Every week without clear guidance is another week where the gap between France’s AI ambition and its public sector reality widens. Every month without embedded AI data governance is another month where the vicious cycle of low adoption and low optimism entrenches itself further.
Shadow AI doesn’t require high adoption to create risk. It only requires a few motivated individuals working around institutional barriers with sensitive data and no guardrails. In France — where 1 in 3 public servants say their workplace actively makes it difficult to use AI where it would be helpful — those conditions are firmly in place.
The 342 French public servants surveyed in this index are sending a clear message: Give us the guidance, give us the tools, and show us that AI is relevant to the work we do every day. The question is whether French government leaders are willing to match their world-class AI strategy with the frontline execution it demands — before last place becomes a permanent position.
Frequently Asked Questions
What is the Public Sector AI Adoption Index 2026?
The Public Sector AI Adoption Index 2026 is a global study by Public First for the Center for Data Innovation, sponsored by Google. It surveyed 3,335 public servants across 10 countries — including 342 in France — to measure how AI is experienced in government workplaces. The index scores countries across five dimensions: enthusiasm, education, empowerment, enablement, and embedding, each on a 0–100 scale. It goes beyond measuring whether governments have AI strategies and examines whether public servants have the tools, training, permissions, and infrastructure to use AI effectively in their daily roles.
Where does France rank in the Public Sector AI Adoption Index?
France ranks last — 10th out of 10 countries — with an overall score of 42 out of 100. It scores lowest on embedding (36/100) and empowerment (39/100), reflecting minimal integration of AI into everyday workflows and unclear governance around AI use. France falls behind all other countries in the index, including Germany (44), Japan (43), the U.S. (45), U.K. (47), and advanced adopters like Saudi Arabia (66), Singapore (58), and India (58). The index classifies France as a “cautious adopter” — a country with strong national strategy and ethical frameworks but persistent failure to translate these into confident, everyday AI use by frontline public servants.
Why does France rank last despite its early AI strategy?
The index data points to an execution gap rather than a strategy gap. France has invested heavily in AI research, talent, and policy frameworks since 2018, and its ‘trusted AI’ framework is closely aligned with EU AI Act requirements. However, this has not translated into frontline practice. 45% of public servants have never used AI at work. 66% have received no training. Only 27% say their organisation has invested in AI tools. Over 50% say leaders don’t provide clear direction on AI. The result is a self-reinforcing cycle: Weak embedding limits adoption, low adoption limits visible benefits, and the absence of benefits sustains low optimism — with France recording the lowest optimism of any country at just 33%.
What is shadow AI, and why is it a risk for French government organisations?
Shadow AI refers to public servants using unapproved AI tools — often personal accounts for services like ChatGPT — for work tasks without their organisation’s knowledge or oversight. The index found that in low-enablement environments globally, 64% of enthusiastic AI users rely on personal logins at work and 70% use AI without their manager knowing. France’s low enablement (42/100) and empowerment (39/100) scores, combined with 1 in 3 public servants saying their workplace actively makes it difficult to use AI, create conditions where motivated individuals work around institutional barriers. This puts citizen data at risk under GDPR, the EU AI Act, and France’s Loi Informatique et Libertés, with no audit trail or forensic capability to assess exposure.
How can French government organisations enable AI use securely?
The index data suggests organisations need to shift from restricting AI access to enabling it securely. This means deploying approved enterprise AI tools with built-in AI data governance controls, such as platforms that keep sensitive data within the private network while enabling productivity with AI assistants like Claude, ChatGPT, and Copilot. Organisations should implement data security posture management (DSPM) to classify sensitive data and enforce policies automatically, maintain immutable audit logs for all AI-data interactions, and establish incident response capabilities specific to AI data exposure scenarios. Solutions like Kiteworks’ Secure MCP Server, aligned with GDPR, the EU AI Act, and France’s Loi Informatique et Libertés, demonstrate how organisations can enable AI productivity without sacrificing data security or compliance.
Which countries rank highest, and what did they do differently?
Saudi Arabia (66/100), Singapore (58/100), and India (58/100) are the top-ranked countries. Each took a different path but shared common elements: clear rules on what public servants can and cannot use AI for, approved and secure tools provided through the organisation, and visible leadership support that framed AI as modernisation rather than risk. Critically, these countries made AI tangible by giving public servants tools, training, and permission simultaneously — breaking the cycle of low visibility and low optimism that has trapped France. France has comparable strategic ambition and stronger ethical frameworks than most, but has not yet matched this with the frontline tool access, clear permissions, and practical training that advanced adopters have delivered.