Brazil’s Self-Taught AI Workforce: Enthusiasm Outpaces Governance
Brazil’s public servants didn’t wait for permission. They didn’t wait for training programmes. They didn’t wait for enterprise licences or IT helpdesk approvals. They taught themselves AI, started using it at work, and — according to the data — they’re getting results.
The problem is that they’re doing all of this without guardrails.
Key Takeaways
- Brazil Has the Most Self-Taught AI Workforce in the World — and That’s the Problem. 67% of Brazilian public servants say their AI knowledge is entirely or mostly self-taught — the joint-highest share in the index. 52% describe their learning as entirely self-taught. This isn’t a training success story. It’s a workforce that taught itself because no one else stepped up — and is now using AI on government data with no formal guidance, no approved tools, and no audit trail.
- Brazil Ranks 5th Overall but Dead Last on Enablement. Brazil scored 49 out of 100 overall — placing it 5th in the index, ahead of the U.K., U.S., Germany, Japan, and France. But it scored just 41/100 on enablement, the lowest of any country. Over 60% say their organisation doesn’t provide the tools or resources needed to use AI effectively. More than 1 in 5 have no access to AI tools at work at all.
- Enthusiasm Is Sky-High, but Institutions Aren’t Keeping Up. 83% of Brazilian public servants describe AI as effective. 89% say it saves time. 65% are optimistic about AI in the public sector. But 68% say leaders fail to provide clear direction on AI use, and 49% wouldn’t know who to ask for help if something went wrong. The gap between enthusiasm and institutional readiness is the widest in the index.
- 63% Started Using AI at Work Within the Past Year — Mostly on Their Own. AI adoption in Brazil is recent, fast-moving, and almost entirely self-initiated. Unlike advanced adopters where uptake is driven by state strategy and institutional rollout, Brazil’s adoption is grassroots. This creates massive momentum — and massive governance risk, as informal use outpaces every institutional safeguard. As agentic AI systems gain the ability to act autonomously, that governance risk compounds.
- Almost 2 in 5 Brazilian Public Servants Aren’t Confident They’re Using AI Within Policy. 39% are not confident they are using AI in line with their workplace’s policies. 1 in 4 feel their workplace is making it difficult to use AI where it would be helpful. When individual accountability outpaces institutional governance, the risk doesn’t fall on the organisation — it falls on the public servant.
The Public Sector AI Adoption Index 2026, released recently by Public First for the Center for Data Innovation with sponsorship from Google, surveyed 3,335 public servants across 10 countries — including 382 in Brazil. Brazil scored 49 out of 100, placing 5th in the index. That puts it ahead of the U.K. (47), the United States (45), Germany (44), Japan (43), and France (42).
On paper, that’s a strong result. But dig into the dimension scores and a very different picture emerges — one where the workforce has raced ahead of the institutions that are supposed to govern them.
The Numbers That Reveal Brazil’s Paradox
The index measures how public servants experience AI across five dimensions: enthusiasm, empowerment, enablement, embedding, and education. For Brazil, the scores tell the story of the widest enthusiasm-to-infrastructure gap in the entire study:
- Enthusiasm: 60/100 — the fourth-highest score in the index. 65% of Brazilian public servants feel optimistic about AI. 83% describe it as effective. 89% say it saves time.
- Education: 54/100 — some training exists, but around half of public servants report receiving no formal AI training. 67% say their AI knowledge is entirely or mostly self-taught — the joint-highest share in the index.
- Empowerment: 46/100 — 68% say leaders fail to provide clear communication and direction on AI use. Almost 2 in 5 are not confident they’re using AI in line with workplace policies.
- Enablement: 41/100 — the lowest enablement score of any country in the index. Over 60% say their organisation doesn’t provide the tools or resources needed to use AI effectively. More than 1 in 5 report having no access to AI tools at work. Only 15% say their organisation is using the most suitable AI tools for their work.
- Embedding: 44/100 — early or uneven integration, with AI use dependent on local initiative rather than systemic support.
63% of Brazilian public servants started using AI at work within the past year. Most did it on their own. And 49% say they wouldn’t know who to approach for help if they encountered a problem.
This is a workforce that has built its own AI capability from scratch — and is now operating at scale with almost no institutional infrastructure underneath it.
The Shadow AI Crisis Hiding Behind the Enthusiasm
Here is the global finding from the index that Brazilian government security leaders cannot afford to ignore.
In low-enablement environments across the index, 64% of enthusiastic AI workers report using personal logins at work, and 70% use AI for work tasks without their manager knowing.
Brazil has the lowest enablement score in the index (41/100) and one of the highest enthusiasm scores (60/100). That is the exact combination that produces the most acute shadow AI risk of any country in the study.
Think about what this means in practice. Public servants across federal, state, and municipal governments using personal ChatGPT, Gemini, or other AI accounts to draft documents, analyse datasets, summarise casework, and process citizen information. Sensitive data — including information protected under Brazil’s General Data Protection Law (LGPD) and ANPD rules on cross-border data transfers — potentially being ingested into public large language models with no audit trail, no data classification controls, and no ability to determine what was exposed after the fact.
The risk is compounding. As AI evolves beyond simple chatbots into agentic systems — autonomous AI that can reason, act, and interact with enterprise resources independently — the consequences of ungoverned access multiply. A misconfigured AI agent can leak thousands of sensitive records in minutes, far faster than any human insider. Every agent deployed creates a non-human identity requiring API access and machine-to-machine authentication that traditional identity management systems were not designed to handle. In Brazil, where adoption is already outpacing institutional controls, the arrival of agentic AI turns an existing governance gap into an urgent structural vulnerability.
And here’s what makes Brazil’s situation uniquely dangerous: Using public AI tools may shift legal accountability from the institution to the individual. With 39% of public servants already unsure whether their AI use aligns with workplace policies, and no approved systems or clear guidelines in place, individual public servants are absorbing compliance risk that should sit with the organisation.
This is where the conversation needs to shift from celebrating Brazil’s enthusiasm to securing it. The infrastructure required to bridge this gap needs to enable AI productivity with tools like Claude, ChatGPT, and Copilot while keeping sensitive data within a private network. Existing governance frameworks (RBAC/ABAC) must extend to all AI interactions — including those initiated by autonomous agents — every AI operation must be logged for compliance and forensics, and sensitive content must never leave the trusted environment. Kiteworks’ Secure MFT Server is one example of this approach in practice, keeping AI interactions within the boundaries of a Private Data Network where every operation is secured with OAuth 2.0 authentication and governed by existing organisational policies. For Brazilian government organisations, alignment with the LGPD and ANPD’s regulatory framework means these protections map directly to existing compliance obligations — including the evolving requirements under PL 2338/2023.
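To make the RBAC/ABAC extension described above concrete, here is a minimal sketch of what an attribute-based gate on AI interactions could look like, with every decision logged. The roles, classification labels, and ceiling rules are hypothetical illustrations — not Kiteworks APIs, LGPD-mandated values, or any agency's actual policy.

```python
# Hedged sketch: an ABAC-style gate for AI interactions. All labels,
# roles, and rules below are illustrative placeholders.
from dataclasses import dataclass
from datetime import datetime, timezone

# Classification levels, lowest to highest sensitivity (hypothetical).
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

# Per-role ceiling on what may be sent to an AI tool (hypothetical).
ROLE_CEILING = {"analyst": "internal", "case_officer": "confidential"}

audit_log = []  # In practice: an append-only, exportable store.

@dataclass
class AIRequest:
    user: str
    role: str
    tool: str         # e.g. "claude", "chatgpt", "copilot"
    data_label: str   # classification of the data in the prompt

def authorize(req: AIRequest) -> bool:
    """Allow the request only if the data's classification does not
    exceed the role's ceiling; log every decision either way."""
    ceiling = ROLE_CEILING.get(req.role, "public")
    allowed = LEVELS[req.data_label] <= LEVELS[ceiling]
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": req.user, "role": req.role, "tool": req.tool,
        "label": req.data_label,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

print(authorize(AIRequest("ana", "analyst", "claude", "internal")))     # True
print(authorize(AIRequest("ana", "analyst", "chatgpt", "restricted")))  # False
```

The point of the sketch is the shape of the control, not the specifics: every AI call passes through a policy decision that consults existing attributes, and the decision itself becomes part of the compliance record.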
The alternative — letting a self-taught workforce continue to operate on public AI tools with government data and no oversight — is not enthusiasm. It’s a data protection incident waiting to happen.
The Interest-Infrastructure Gap: Why Brazil’s Strength Is Also Its Vulnerability
Brazil’s story in this index is unlike any other country’s. In the U.S. and U.K., the challenge is ambivalence — public servants who have access to some tools but haven’t been given the confidence to use them. In France and Germany, the challenge is inertia — workforces that barely engage with AI at all. In the advanced adopters — Singapore, Saudi Arabia, India — enthusiasm is matched by institutional infrastructure.
Brazil sits in a category of its own: massive enthusiasm, deep self-taught capability, and almost no institutional support.
The numbers tell this story vividly. Brazil has the strongest digital foundations in Latin America — the gov.br platform, widespread digital ID, and PIX have created a digitally engaged population and rich data environment. At the national level, the 2024–2028 National AI Plan, aligned with Brazil’s G20 presidency, places explicit emphasis on using AI to improve public services. At the subnational level, states like Goiás, Paraná, and Piauí are pioneering AI legislation and regulatory sandboxes.
But these top-level investments haven’t reached the frontline. 61% of Brazilian public servants say their organisation fails to provide what they need to use AI effectively. Only 15% say their organisation is using the best AI tools for their work. 49% wouldn’t know who to approach for help. Access is largely confined to publicly available tools, with limited availability of enterprise or in-house systems.
The result is a public sector where AI adoption is booming — but entirely unmanaged. And in data protection terms, unmanaged enthusiasm is more dangerous than no enthusiasm at all.
The Missing Layer: AI Data Governance for Brazilian Government
Brazil’s regulatory landscape is already complex — and getting more so. Compliance with the LGPD, ANPD rules on cross-border data transfers, and the potential enactment of PL 2338/2023 create layers of obligation that government organisations must navigate. When AI is adopted informally, on personal accounts, without organisational governance, these obligations become impossible to meet.
Most Brazilian government organisations lack visibility into what data is being shared with AI systems. Which public servants are using AI, and for what purposes? Do AI-generated outputs contain sensitive citizen information? How can data classification policies be enforced when AI tools are involved? For most organisations, the answer is that they have no way to know — because the AI use is happening outside their systems entirely.
This visibility gap becomes even more urgent as AI shifts from passive tools to active agents. Agentic AI systems don’t wait for prompts — they execute multi-step processes, access databases, and interact with external APIs with substantial independence. Each agent creates a non-human identity that needs to be secured, and most government identity management systems are not equipped to handle that at scale. Data-layer security with zero-trust governance, context-aware authorisation, and unified visibility across every interaction — whether initiated by a human or an AI agent — is no longer optional.
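What "treating an agent as a governed non-human identity" could mean in practice can be sketched in a few lines: scoped, short-lived credentials checked on every action, so no agent holds standing, unscoped access. The agent names and scopes below are illustrative assumptions, not any vendor's identity model.

```python
# Hedged sketch: an AI agent as a non-human identity with scoped,
# short-lived credentials. Names and scopes are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    scopes: set          # e.g. {"read:casework"}
    expires_at: datetime

def agent_may(identity: AgentIdentity, scope: str, now=None) -> bool:
    """Context-aware check: the agent must hold the exact scope and
    its credential must not have expired."""
    now = now or datetime.now(timezone.utc)
    return scope in identity.scopes and now < identity.expires_at

now = datetime.now(timezone.utc)
# Hypothetical agent: may read casework for the next 15 minutes only.
summariser = AgentIdentity("doc-summariser-01", {"read:casework"},
                           now + timedelta(minutes=15))

print(agent_may(summariser, "read:casework"))   # True: in scope, not expired
print(agent_may(summariser, "write:casework"))  # False: scope never granted
print(agent_may(summariser, "read:casework",
                now + timedelta(hours=1)))      # False: credential expired
```

The design choice worth noting is the expiry: machine identities that authenticate continuously at machine speed need credentials that lapse quickly, so a compromised or misconfigured agent loses access by default rather than by discovery.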
This is where AI data governance frameworks become essential — not as a barrier to the momentum Brazil has built, but as the infrastructure that makes that momentum sustainable and compliant. Data security posture management (DSPM) capabilities can discover and classify sensitive data across repositories, including data being ingested into AI systems. Automated policy enforcement can block privileged or confidential data from AI ingestion based on classification labels. Comprehensive audit logs can track all AI-data interactions. And when aligned with the LGPD and ANPD’s regulatory framework, these capabilities help organisations govern AI risks throughout the data life cycle.
The capabilities needed to close this gap are clear: integration of DSPM with automated policy enforcement and immutable audit logging. Every AI-data interaction should be captured with user ID, timestamp, data accessed, and the AI system used. Kiteworks’ Private Data Network delivers this approach, combining these capabilities into a unified platform with AI-powered anomaly detection that flags suspicious activity — like an agent suddenly requesting large volumes of data it doesn’t normally access. For Brazil, where individual adoption has outpaced institutional readiness, this kind of infrastructure doesn’t slow momentum — it protects it.
What Brazilian Public Servants Are Telling Their Government
The index data reveals a workforce that is not asking to be held back. They’re asking to be supported.
49% say they wouldn’t know who to approach for help if they had a problem with AI. 61% say their organisation doesn’t provide what they need. Almost 2 in 5 aren’t confident they’re using AI within policy. 1 in 4 feel their workplace actively makes it difficult to use AI where it would be helpful.
The global data on what encourages more frequent AI use is consistent across every country: clear guidance, easier-to-use tools, and data security assurance rank as the top three enablers. Dedicated budget ranks near the bottom. Brazilian public servants have already demonstrated they’ll adopt AI without budget, without training programmes, and without enterprise licences. What they need now is the organisational infrastructure that makes their existing use safe, compliant, and effective.
Why Enablement Is Brazil’s Make-or-Break Dimension
Brazil scored 41/100 on enablement — the lowest of any country in the index. And the global data shows why enablement matters so much for shadow AI risk.
In low-enablement organisations globally, 33% of public servants who use AI in their personal lives never use it at work — showing how gaps in access prevent familiar tools from translating into public sector productivity. But in Brazil, the dynamic is reversed: Public servants are using AI at work despite having no institutional support, on personal tools, with government data. That’s not an enablement gap — it’s a governance emergency.
Across all countries, 61% of workers in high-embedding environments report benefits from advanced AI use, compared with just 17% where embedding is low. Brazil’s embedding score (44/100) reflects early progress, but the index makes clear that embedding without enablement is fragile — built on individual initiative that can stall or create risk at any moment.
Three Priorities That Could Secure Brazil’s Momentum
The index points to three actions that could convert Brazil’s grassroots AI adoption into sustainable, institution-led transformation if pursued together — and fast.
First, expand access to trusted, secure AI tools and core infrastructure. Brazil’s weakest score is enablement. Public servants are relying on personal or publicly available tools because their organisations haven’t provided alternatives. Public policies that expand access to approved, enterprise-grade AI tools — alongside the necessary cloud and data infrastructure — would bring informal use into the open and under governance. This is especially critical as agentic AI systems enter government workflows, since autonomous agents require the same zero-trust governance as human users — with the added need for machine-to-machine authentication, sandboxed execution, and real-time anomaly detection. Platforms like Kiteworks’ Secure MFT Server demonstrate how to deliver this: enabling AI productivity across tools like Claude, ChatGPT, and Copilot while keeping sensitive data within the private network, with full compliance logging and alignment with the LGPD and ANPD’s framework. When approved tools are as easy to use as personal accounts — but secure, logged, and compliant — adoption doesn’t slow down. It gets safer.
Second, pair practical training with clear AI use policies and incident readiness. While awareness and optimism are high, most public servants lack formal training or confidence that their AI use is supported. Short, practical, role-specific training should be combined with clear AI use policies that create a safe harbour for everyday tasks. Clear guidance on what’s permitted, how data should be handled, and where to seek support would help bring informal AI use into the open. And organisations need incident response capabilities in parallel. Without immutable audit logs, SIEM integration, and chain-of-custody documentation, the self-taught AI use already happening across Brazilian government creates unquantifiable compliance risk under the LGPD.
Third, create clear pathways from experimentation to scale. AI adoption in Brazil is driven by individual initiative, and that creates a major opportunity. High levels of enthusiasm and self-directed use point to strong potential for a culture of learning, experimentation, and peer-to-peer discovery. To unlock this at scale, public servants need clearer organisational structures and a clear mandate from government in the form of legislation and regulatory sandboxes. As AI tools evolve toward agentic capabilities, these sandboxes should include provisions for governing autonomous AI systems — ensuring agents operate within defined boundaries before being deployed at scale. States like Goiás and Paraná are already leading here — expanding these models nationally would give Brazil’s grassroots adoption the institutional scaffolding it needs to become sustainable, system-wide transformation.
The Stakes Are Higher Than Rankings
Brazil ranking 5th in this index is impressive — but the number masks a ticking clock. Every day that self-taught public servants operate on personal AI tools with government data is another day of compliance exposure under the LGPD. Every week without approved enterprise tools is another week where citizen data flows through systems the government can’t see, audit, or control. Every month without clear AI governance is another month where the legal accountability that should sit with institutions falls instead on individual public servants who are just trying to do their jobs better. And as AI agents become more autonomous and more prevalent, the attack surface grows in tandem.
Brazil’s public servants have done something remarkable: They’ve adopted AI faster and more enthusiastically than their counterparts in the U.S., U.K., Germany, and France — without being asked, without being trained, and without being given the tools. That’s a testament to the ingenuity and drive of Brazil’s public workforce.
But ingenuity without infrastructure is a risk, not a strategy. The 382 Brazilian public servants surveyed in this index have already shown they’ll embrace AI. The question is whether their government will meet them with the secure tools, clear policies, and data governance they need — before enthusiasm becomes liability.
Frequently Asked Questions
What is the Public Sector AI Adoption Index 2026?
The Public Sector AI Adoption Index 2026 is a global study by Public First for the Center for Data Innovation, sponsored by Google. It surveyed 3,335 public servants across 10 countries — including 382 in Brazil — to measure how AI is experienced in government workplaces. The index scores countries across five dimensions: enthusiasm, empowerment, enablement, embedding, and education, each on a 0–100 scale. It goes beyond measuring whether governments have AI strategies and examines whether public servants have the tools, training, permissions, and infrastructure to use AI effectively in their daily roles.
How does Brazil rank in the index?
Brazil ranks 5th out of 10 countries with an overall score of 49 out of 100. It scores highest on enthusiasm (60/100), reflecting widespread optimism and positive experiences with AI, but lowest on enablement (41/100) — the weakest enablement score of any country in the index. This means Brazilian public servants are enthusiastic and self-motivated but severely lacking in organisational tools, support, and infrastructure. Brazil sits ahead of the U.K. (47), U.S. (45), Germany (44), Japan (43), and France (42), but behind advanced adopters like Saudi Arabia (66), Singapore (58), India (58), and South Africa (55).
What makes Brazil’s AI adoption different from other countries’?
Brazil’s adoption is almost entirely self-initiated rather than institution-led. 67% of public servants say their AI knowledge is entirely or mostly self-taught. 63% started using AI at work within the past year. But over 60% say their organisation doesn’t provide the tools or resources needed to use AI effectively. More than 1 in 5 have no access to AI tools at work, and only 15% say their organisation is using the most suitable AI tools. The result is a public sector where AI is adopted enthusiastically by individuals but without enterprise tools, formal governance, or organisational support — creating the widest enthusiasm-to-infrastructure gap in the index.
What is shadow AI, and why is it a particular risk in Brazil?
Shadow AI refers to public servants using unapproved AI tools — often personal accounts — for work tasks without organisational knowledge or oversight. The index found that in low-enablement environments globally, 64% of enthusiastic AI users rely on personal logins and 70% use AI without their manager knowing. Brazil has the lowest enablement score (41/100) and one of the highest enthusiasm scores (60/100) — the exact combination that produces the most acute shadow AI risk. As AI evolves toward agentic systems that act autonomously, these risks compound — ungoverned agents can cause data exposure far faster than any human user. Sensitive citizen data protected under Brazil’s LGPD and ANPD rules may be ingested into public AI models with no audit trail, no data classification controls, and no forensic capability. Further, using public AI without approved systems may shift legal accountability from the institution to the individual public servant.
What should Brazilian government organisations do about self-taught AI use?
Organisations should shift from informal, self-directed AI use to secure, institutionally supported adoption. This means deploying approved enterprise AI tools with built-in data governance controls — platforms that keep sensitive data within the private network while enabling productivity with AI assistants like Claude, ChatGPT, and Copilot. Data security posture management (DSPM) should classify sensitive data and enforce policies automatically. Immutable audit logs should track all AI-data interactions. And incident response capabilities must be in place before scaling. As agentic AI enters the picture, organisations also need zero-trust controls for non-human identities, sandboxed execution environments, and real-time anomaly detection for machine-speed operations. Solutions like Kiteworks’ Secure MFT Server, aligned with Brazil’s LGPD and ANPD regulatory framework, demonstrate how organisations can support the AI momentum their workforce has already built while ensuring compliance and protecting citizen data.
Which countries top the index, and what can Brazil learn from them?
Saudi Arabia (66/100), Singapore (58/100), and India (58/100) are the top-ranked countries. India’s story is most relevant to Brazil — both countries show high enthusiasm and fast-moving adoption. But India paired its enthusiasm with the government’s “AI for All” strategy, free government-hosted courses, and consistent positive messaging that created institutional momentum alongside individual initiative. Singapore and Saudi Arabia provided centralised platforms, approved tools, and clear governance from the top. Brazil’s opportunity is to follow a similar path: converting its world-leading grassroots enthusiasm into institutionally supported, securely governed adoption — before informal use creates compliance exposure that undermines the progress public servants have already made.