Canada’s AI Consultation Used AI to Analyze 64,600 Responses. Here’s What Got Lost in Translation.

In October 2025, Canadian AI Minister Evan Solomon launched what the government called a 30-day national sprint — the largest public consultation in the history of Innovation, Science and Economic Development Canada (ISED). The goal was ambitious: gather public and expert input to shape Canada’s next national AI strategy.

The consultation collected 64,600 responses to 26 questions from the Canadian public. A separate 28-member expert task force produced 32 papers and reports covering everything from commercialization and infrastructure to safety, trust, and global competitiveness.

Then the government did something that should make every data governance professional sit up straight: it used four AI models — Cohere Command A, OpenAI GPT-5 nano, Anthropic Claude Haiku, and Google Gemini Flash — to read, analyze, and summarize the results.

The resulting “what we heard” report, released quietly in early February 2026, compresses those tens of thousands of voices into a tidy government narrative. And as University of Ottawa law professor Michael Geist demonstrated in a detailed analysis, that narrative consistently softens, sanitizes, and reframes the experts’ most pointed warnings into the kind of balanced policy language that sounds reasonable but obscures the actual advice.

This isn’t just a Canadian policy story. It’s an AI data governance story — and it has direct implications for any organization using AI to process, summarize, or make decisions based on sensitive content.

5 Key Takeaways

  1. Canada Used AI to Summarize Its Own AI Strategy Consultation — and the Results Deserve Scrutiny. The Canadian government used four different AI models — Cohere Command A, OpenAI GPT-5 nano, Anthropic Claude Haiku, and Google Gemini Flash — to analyze 64,600 public responses to 26 questions. While AI enabled the government to compress months of analysis into weeks, the resulting summary consistently softened the experts’ most urgent warnings into balanced “government-speak.” When AI summarizes policy input, whoever sets the prompts controls the narrative.
  2. The Experts Said Execution, Not Research, Is Canada’s Real AI Problem. The 28-member expert task force produced 32 reports that repeatedly emphasized the same message: Canada’s challenge is not a shortage of research talent or academic excellence. It’s the inability to commercialize, scale, and deploy AI at globally competitive levels. The government’s summary presented every policy pillar as an equal priority, creating an illusion of balanced consensus where the experts saw a five-alarm fire.
  3. Speed Was Framed as a Strategic Variable — But the Government Summary Ignored It. The expert reports frame speed as a competitive weapon: countries that move faster lead, while those that hesitate end up regulating what others have built. Some experts pointed directly at the government itself — slow procurement, delayed funding, and regulatory bottlenecks — as part of the problem. None of that self-criticism survived into the official summary.
  4. Trust and Safety Regulation Drew Sharp Disagreements the Summary Erased. The public consultation heavily emphasized AI safety, and the government appears headed toward governance frameworks, mandatory audits, and risk-based regulation. But the expert reports were far less unified. Some advocated moving quickly on regulation, while others warned that overly broad rules would disadvantage Canadian firms and regulate technologies Canada doesn’t control. The government’s summary presented trust as a settled consensus, not the contested policy battleground it actually is.
  5. The Biggest Data Governance Lesson Isn’t About AI Policy — It’s About AI-Processed Policy. When a government uses AI to summarize its own consultation results, it introduces the same data governance risks that enterprises face every day: lack of transparency into how inputs become outputs, no audit trail showing what was filtered or emphasized, and no ability for stakeholders to verify that their input was accurately represented. The process itself is a case study in why AI data governance matters.

What the Experts Actually Said vs. What the Government Published

The government made an unusual decision: it published all 32 expert reports alongside its own summary. That transparency created an opportunity for anyone willing to read the source material to compare what was submitted with what was reported.

Geist did exactly that. He uploaded all 32 documents to both ChatGPT and Perplexity AI and generated his own summaries of major themes and areas of disagreement. The divergences between his AI-generated summaries and the government’s AI-generated summary are telling.

The expert reports consistently frame Canada’s AI challenge as an execution problem, not a research problem. Whether the topic is commercialization, adoption, infrastructure, or scaling to globally competitive operations, the reports hammer the same message: Canada has world-class research but has failed to move beyond it. The government summary, by contrast, presents each policy pillar as a parallel priority with balanced objectives. Everything matters equally. Few trade-offs are acknowledged. The urgency that runs through the expert reports is methodically filed down.

The same pattern holds on speed. The expert reports frame speed as a strategic variable — countries that move faster lead, countries that hesitate get left to regulate what others have built. Several reports point directly at government itself as part of the problem: slow procurement cycles, delayed funding decisions, and regulatory approval processes that move at bureaucratic pace while competitors sprint. The government summary contains no acknowledgment that Canada’s own pace may be undermining its competitiveness, data sovereignty, and capacity to shape global AI norms.

On access to capital and government procurement, the divergence gets sharper. The expert reports describe Canada’s inability to scale AI firms as a structural constraint tied to the absence of domestic capital — not a minor concern but a fundamental bottleneck. Some experts argue that the country’s reliance on grants has become counterproductive, shielding companies from market discipline while failing to generate customers, revenue, or scale. Government procurement, several reports suggest, would be a far more effective industrial policy lever — compete for business, not handouts. The government summary refers to capital challenges indirectly without engaging with the political choices involved.


The Trust and Safety Debate That Disappeared

Perhaps the most consequential divergence between the expert reports and the government summary involves trust and safety.

AI safety was a dominant theme in the public consultation responses, and the government is clearly headed toward building governance frameworks, mandatory audits, transparency requirements, and risk-based regulation into its national AI strategy. The public, understandably, wants guardrails.

But the expert reports are far less unified on how to build those guardrails. Nearly everyone agrees that trust is essential for adoption. The disagreements emerge on implementation. Some experts advocate for moving quickly on binding regulation. Others warn that overly broad rules will slow deployment, put Canadian firms at a competitive disadvantage, and attempt to regulate technologies that Canada does not control and did not build. Minister Solomon himself has described the regulatory philosophy as “light, tight, and right,” acknowledging that overregulation can chase companies and capital to friendlier jurisdictions.

Those disagreements largely vanish in the government’s summary, where trust is presented as a settled consensus objective rather than what it actually is: a contested policy domain with real trade-offs and legitimate opposing views. This isn’t consensus. It’s the appearance of consensus, manufactured through selective summarization.

From a Kiteworks perspective, the trust and safety debate highlights a critical gap that extends well beyond government policy. Every organization deploying AI faces the same tension: how do you enable AI-driven productivity while maintaining the governance controls that protect sensitive data? The answer isn’t to restrict AI access — the Canada consultation data shows that approach just drives usage underground. The answer is to build secure infrastructure that makes AI adoption possible within a governed framework. That means deploying a private content network where sensitive data flows through AI tools with complete visibility, granular access controls, and comprehensive audit trails — so organizations can say “yes” to AI without saying “yes” to uncontrolled AI risk.

When AI Summarizes AI Policy, Who’s Watching the Summarizer?

There’s a deeper AI data governance problem embedded in this story that goes beyond policy disagreements.

The Canadian government used AI to summarize input that will shape national AI policy. That creates a recursive trust problem: the tool being governed is also the tool doing the governing analysis. And the process raises exactly the same questions that enterprises face when they deploy AI to process sensitive content:

  • Transparency: What prompts were used to guide the AI analysis? Were the models instructed to find consensus, or to surface disagreements? Were they told to balance perspectives, or to faithfully represent the distribution of views? The public doesn’t know.
  • Auditability: There’s no published audit trail showing how the AI models transformed 64,600 raw responses into thematic summaries. No one outside government can verify whether the summarization accurately captured the input or systematically filtered it.
  • Reproducibility: Four different AI models were used. Were their outputs cross-validated? Did they agree? Where they diverged, how were conflicts resolved? The report doesn’t say.
  • Accountability: When an AI-generated summary softens expert warnings into bureaucratic prose, who is accountable? The model? The prompt engineer? The minister’s office? The chain of custody is opaque.
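
The accountability gap above can be made concrete. The sketch below is purely illustrative — the field names and hash-chaining scheme are assumptions, not a description of any government or Kiteworks system — but it shows how a hash-chained audit record could capture the prompt, model, and input/output fingerprints for each AI summarization step, so that any later alteration of a record becomes detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_ai_interaction(log, *, model, prompt, input_text, output_text, operator):
    """Append a hash-chained audit record for one AI summarization step.

    Chaining each record to the previous one makes after-the-fact
    tampering detectable: altering any entry breaks every later hash.
    """
    prev_hash = log[-1]["record_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": operator,   # who ran the analysis
        "model": model,         # which model produced the output
        "prompt": prompt,       # the instruction that shaped the summary
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "prev_hash": prev_hash,
    }
    # Hash the canonical serialization of the entry itself.
    entry["record_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Return True iff no record has been altered or reordered."""
    prev = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "record_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["record_hash"]:
            return False
        prev = entry["record_hash"]
    return True
```

Because each record embeds the hash of the one before it, an auditor can walk the chain and detect whether the prompt or output recorded for any step was changed after the fact — precisely the verification the public cannot perform on Canada’s consultation summary.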

These aren’t hypothetical concerns. They’re the same data governance challenges that every enterprise confronts when AI touches sensitive content. And the Canadian government’s experience illustrates exactly why organizations need infrastructure that provides visibility into how AI processes data — not just what goes in, but what comes out and how it was transformed along the way.

Kiteworks addresses this gap through its approach to AI data governance. By routing sensitive content through a private content network with Data Security Posture Management (DSPM) capabilities, organizations can track exactly which data is being shared with AI systems, enforce data classification-based policies that prevent privileged content from being ingested without authorization, and maintain immutable audit logs that capture every AI-data interaction. The goal isn’t to block AI. It’s to ensure that when AI processes sensitive information, there’s a complete, verifiable record of what happened.
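
A minimal sketch of what classification-based gating could look like in code — the tier names and model identifiers below are hypothetical, and real DSPM policy engines are far richer, but the core idea is a deny-by-default check between a document’s classification and the AI destination:

```python
# Hypothetical policy table: which AI destinations may ingest each
# classification tier. Anything not listed is denied by default.
ALLOWED = {
    "public":     {"any-model"},              # no restriction
    "internal":   {"approved-internal-llm"},  # vetted models only
    "privileged": set(),                      # never ingested without explicit authorization
}

def may_ingest(classification: str, model: str) -> bool:
    """Deny-by-default check: unknown tiers and unlisted models are blocked."""
    allowed = ALLOWED.get(classification, set())
    return "any-model" in allowed or model in allowed
```

The design choice worth noting is the default: an unrecognized classification or model returns False, so a gap in the policy table fails closed rather than open.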

The Broader Lesson for Enterprise AI Governance

The Canada consultation story is a microcosm of a challenge that every organization faces as AI adoption accelerates.

The public and the experts both wanted meaningful input into a consequential policy decision. The government ran a large-scale consultation and used AI to analyze the results. The AI-generated summary diverged in important ways from the source material. And no one outside the process can verify whether the divergence was intentional, accidental, or simply an artifact of how large language models compress complexity into coherence.

Now map that to an enterprise context. Legal teams using AI to summarize contract negotiations. Finance departments using AI to analyze due diligence documents. HR teams using AI to process employee feedback. Compliance teams using AI to review regulatory submissions. In every case, the same questions apply: Is the summary faithful to the source? What was filtered out? Can you prove it?
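
One crude way to surface the “what was filtered out?” question is to compare summaries of the same source produced by different models and flag divergence. The sketch below uses a toy lexical proxy (overlap of longer words); a production pipeline would use semantic similarity instead, but the principle — measure and log disagreement rather than silently picking one output — is the same:

```python
import re

def key_terms(summary: str, min_len: int = 6) -> set[str]:
    """Crude theme proxy: longer words in the summary, lowercased."""
    return {w.lower() for w in re.findall(r"[A-Za-z]+", summary) if len(w) >= min_len}

def divergence(summary_a: str, summary_b: str) -> float:
    """1 - Jaccard similarity of the two term sets; 0.0 means identical themes."""
    a, b = key_terms(summary_a), key_terms(summary_b)
    if not a and not b:
        return 0.0
    return 1 - len(a & b) / len(a | b)
```

A score near 1.0 for two summaries of the same input is exactly the signal that should trigger human review — the cross-validation step the government’s four-model process never documented.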

The organizations that answer those questions well will be the ones that built the governance infrastructure before they needed it — not after an incident forced their hand. That means consolidating sensitive content communications into a governed platform with complete audit trails, applying zero-trust architecture principles to every AI-data interaction, and maintaining the kind of verifiable chain of custody that regulators, boards, and partners increasingly demand.

What Canada Got Right — and Where It Fell Short

To the government’s credit, it published the full expert reports alongside the summary. That transparency is exactly what data governance advocates push for: show your sources. Geist’s ability to compare the raw input against the processed output is possible only because the source documents were made public. Many governments — and many enterprises — wouldn’t have taken that step.

Where the process fell short was in the governance of the AI analysis itself. No methodology documentation explaining how the AI models were prompted, validated, or cross-checked. No acknowledgment of the limitations inherent in using AI to summarize complex, multi-perspective policy input. No independent verification of the summarization accuracy. And no clear accountability framework for the gap between what was submitted and what was reported.

The irony is difficult to miss: a consultation designed to inform a national AI strategy was itself undermined by insufficient AI governance. The very risks the experts warned about — the need for transparency, audit trails, and accountability in AI systems — were on full display in the process used to summarize their warnings.

The Bottom Line

Canada’s AI consultation is a cautionary tale for every organization that uses AI to process, analyze, or summarize sensitive information. The technology works. It can compress months of analysis into weeks. But without governance infrastructure — transparency into how inputs become outputs, audit trails that capture what was filtered or emphasized, and accountability for the gap between raw data and processed conclusions — the results can quietly diverge from reality in ways that are difficult to detect and impossible to verify after the fact.

The experts who participated in Canada’s consultation took their mandate seriously and provided candid, action-oriented advice. The question isn’t whether their input was valuable. It’s whether the AI-mediated summary accurately reflected it — and whether anyone can prove that it did.

For Kiteworks, this reinforces a principle that runs through everything we do: when sensitive content moves through AI systems, visibility is non-negotiable. That means complete audit trails, automated zero-trust data protection policy enforcement, and the ability to verify that what went in is faithfully represented in what comes out. Governments and enterprises alike need this infrastructure — not because AI is dangerous, but because AI without governance is a black box. And in a world where consequential decisions are increasingly shaped by AI-generated summaries, black boxes are a risk no organization can afford.

To learn how Kiteworks can help, schedule a custom demo today.

Frequently Asked Questions

What was Canada’s AI consultation, and why does it matter for AI data governance?

In October 2025, Canada’s AI Minister Evan Solomon launched a 30-day national sprint to gather input on the country’s next national AI strategy. The consultation collected 64,600 public responses and 32 expert reports from a 28-member task force. The government used four AI models to analyze and summarize the results. The process matters for AI data governance because it demonstrates what happens when AI is used to process large volumes of sensitive, consequential input without adequate transparency, auditability, or independent verification. The divergences between the expert reports and the government’s AI-generated summary highlight risks that are directly relevant to any organization using AI to process sensitive content.

How did the government use AI to analyze the consultation responses?

The government used Cohere Command A, OpenAI GPT-5 nano, Anthropic Claude Haiku, and Google Gemini Flash to read through 64,600 submissions and identify common themes. The AI analysis compressed what would normally be a months-long process into a matter of weeks. However, the methodology — including the prompts used, how model outputs were cross-validated, and how disagreements between models were resolved — was not published. This lack of transparency makes it impossible for the public to verify the accuracy of the summarization.

What divergences did Michael Geist find between the expert reports and the government’s summary?

University of Ottawa law professor Michael Geist conducted a detailed comparison of the 32 expert reports against the government’s summary. Key divergences include the experts’ emphasis on execution over research as Canada’s real challenge, the framing of speed as a strategic variable with direct criticism of government slowness, the structural nature of Canada’s capital access problem, and sharp disagreements on trust and safety regulation that the summary presented as consensus. The government’s summary consistently softened urgency into balanced policy language.

What data governance risks arise when AI is used to summarize policy input?

Using AI to summarize policy input introduces several data governance risks: lack of transparency into how prompts shape outputs, no audit trail showing what was filtered or emphasized during summarization, inability for stakeholders to verify that their input was accurately represented, and unclear accountability when AI-generated summaries diverge from source material. These are the same risks that enterprises face when AI processes sensitive business content — contract analysis, due diligence, regulatory submissions, and employee feedback.

How does Kiteworks help organizations govern AI access to sensitive data?

Kiteworks addresses AI data governance through its private content network approach. Organizations can route sensitive content through a governed platform that provides Data Security Posture Management (DSPM) to discover and classify sensitive data flowing to AI systems, automated policy enforcement that blocks privileged content from unauthorized AI ingestion, immutable audit logs capturing every AI-data interaction including user ID, timestamp, data accessed, and the AI system used, and zero-trust data exchange principles that apply consistent security controls regardless of which AI tools are involved. This infrastructure ensures organizations can adopt AI confidently while maintaining the visibility, accountability, and compliance that regulators and boards increasingly require.

