Gartner Just Said What We’ve Been Thinking About AI Governance
I’ve been reading Gartner reports for longer than I’d like to admit. Most of them confirm what practitioners already suspect. Some challenge assumptions. And occasionally, one lands that makes you stop and think: finally, someone said it out loud.
The December 2025 report from Avivah Litan, Max Goss, and Lauren Kornutick, “Enterprises Should Augment Microsoft Purview and Agent 365 With Independent AI TRiSM Controls,” is that kind of report.
The title alone is remarkable. Gartner isn’t saying Microsoft’s new AI data governance capabilities are bad. They’re saying those capabilities are insufficient on their own. And they’re being pretty direct about why.
The Lock-in Problem Nobody Wants to Talk About
Here’s the thing about Microsoft’s approach to AI governance: it works really well if you live entirely inside the Microsoft ecosystem. Entra Agent ID, the new Purview AI agents, the security dashboard, these are legitimate advances. I mean that sincerely.
But Gartner points out something that should be obvious but apparently isn’t: the deepest auditing and protection features require agents to be registered in Entra ID. If they’re not registered? You get basic visibility at best. And registration is opt-in. Without that registration, and without compensating access controls layered on top of it, you’re flying blind.
Think about that for a second. Your AI data protection strategy depends on AI agents voluntarily registering themselves.
The report also notes that at least 50% of organizations maintain E3 licenses for many employees, while the advanced Purview controls have historically required E5. So even within Microsoft’s own ecosystem, there’s a governance gap based on licensing tier. That feels… problematic.
Actually, let me reconsider that. It’s not just problematic; it creates a two-tier security posture within the same organization. Some users get governed AI; others don’t. Meeting regulatory compliance requirements becomes nearly impossible under those conditions. That’s not a sustainable model.
“Vendor Safeguards Stop at Their Own Borders”
This is probably the most important line in the entire report. Gartner states plainly that no cloud provider can enforce runtime control over AI agents once they operate across another provider’s environment. Microsoft can’t govern what happens in AWS. AWS can’t govern what happens in Azure. Nobody governs what happens on-premises unless you put something there that does.
I’ve been saying variations of this for years about data governance and data sovereignty generally, so it’s validating to see Gartner apply the same logic to AI agents. But it’s also concerning how few organizations seem to be connecting these dots.
When your sensitive data leaves the Microsoft tenant, and it will, because that’s how business works, Purview’s controls don’t follow. Your MIP sensitivity labels? They’re metadata. They’re instructions. But without something enforcing those instructions at the destination, without proper DRM controls, they’re just… labels.
The Guardian Agent Concept
Gartner introduces this idea of “independent guardian agents” that can supervise AI agents across platforms and clouds. They even include a diagram showing guardian agents sitting above platform-specific agent frameworks, providing enterprise-wide oversight.
What strikes me about this framing is how closely it maps to what we’ve been building with the Private Data Network concept. Not because we were thinking about AI agents specifically, we weren’t, at least not initially, but because the underlying problem is the same: how do you govern sensitive data when it moves through systems you don’t control?
Our answer has been: create an independent enforcement layer. Don’t rely on the source platform to govern the data. Don’t rely on the destination platform. Put governance in the middle, where you can apply consistent policy regardless of origin or destination. This zero trust security approach ensures protection travels with the data.
Now that I think about it, this is exactly what Gartner is describing for AI governance. They’re just using different vocabulary.
What Actually Matters for AI Agent Governance
Let me be specific about what I think organizations need, based on this report and on conversations we’ve been having with CISOs over the past year. The CISO Dashboard visibility we provide reflects these same priorities.
First, you need authentication and identity that works across platforms. We built our MCP Server with OAuth 2.0 specifically because AI agents accessing enterprise data need proper IAM verification and MFA. Every. Single. Time. And those credentials can never, I can’t emphasize this enough, never be exposed to the LLM context. Prompt injection attacks are real, and they’re only going to get more sophisticated.
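To make that concrete, here’s a minimal sketch of the credential-isolation idea. The class and method names (GatewayClient, fetch_document, GATEWAY_OAUTH_TOKEN) are hypothetical, not the actual Kiteworks MCP Server API; the point is simply that the OAuth token lives inside the gateway process, and only sanitized content ever reaches the model’s context.

```python
# Hypothetical sketch: keep OAuth credentials out of the LLM context.
# Names here are illustrative, not a real product API.

import os
import requests


class GatewayClient:
    """Executes tool calls on behalf of an AI agent.

    The OAuth 2.0 access token exists only inside this process; the
    model never receives it, so a prompt-injection attack that asks
    the model to reveal its credentials has nothing to reveal.
    """

    def __init__(self, base_url: str):
        self.base_url = base_url
        # Token comes from the environment (or a secrets manager),
        # never from the conversation.
        self._token = os.environ["GATEWAY_OAUTH_TOKEN"]

    def fetch_document(self, doc_id: str) -> dict:
        resp = requests.get(
            f"{self.base_url}/documents/{doc_id}",
            headers={"Authorization": f"Bearer {self._token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()


def build_llm_context(doc: dict) -> str:
    # Only sanitized content goes into the prompt: no headers,
    # no tokens, no session identifiers.
    return f"Document title: {doc['title']}\n\n{doc['body']}"
```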
Second, you need policy enforcement that operates at runtime, not just at configuration time. It’s not enough to set a policy and hope it holds. Every data access needs evaluation against current conditions: who’s the user, where are they, what’s the sensitivity of the data, what action are they attempting? That’s ABAC (attribute-based access control), and it needs to happen in real time. RBAC alone isn’t sufficient for this level of granularity.
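Here’s a rough illustration of what runtime ABAC evaluation looks like. The attributes and rules are invented for the example; a real policy engine evaluates far more context and loads its rules from configuration, but the shape is the same: every request gets decided at access time, against current conditions.

```python
# A minimal ABAC sketch with invented attributes and rules.

from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_role: str          # who is asking
    user_location: str      # where they are (e.g., "corp-network", "unknown")
    data_sensitivity: str   # classification ("public" .. "restricted")
    action: str             # what they want to do ("view", "download", "export")


def evaluate(req: AccessRequest) -> str:
    """Return 'allow', 'allow_view_only', or 'deny' at request time,
    not at configuration time -- every access is re-evaluated."""
    if req.data_sensitivity == "restricted" and req.user_location != "corp-network":
        return "deny"
    if req.data_sensitivity in ("confidential", "restricted") and req.action == "export":
        # High-sensitivity data can be viewed but not exported.
        return "allow_view_only"
    return "allow"


# Same user, same role, different context -> different decision.
print(evaluate(AccessRequest("analyst", "corp-network", "confidential", "view")))  # allow
print(evaluate(AccessRequest("analyst", "unknown", "restricted", "view")))         # deny
```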
Third, and this is where I get a bit passionate, you need audit logs that actually capture everything. A complete audit trail is non-negotiable. The report mentions that Microsoft 365 can delay log entries by up to 72 hours and throttle logging during high-activity periods. That’s exactly when you need complete logs. Attacks generate activity spikes. If your logging throttles during spikes, you’re blind when it matters most.
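To illustrate the principle, here’s a small sketch of synchronous, append-only audit logging with hash chaining. It isn’t how any particular product implements logging; it just shows what “write the record at access time, before returning control” looks like, and how chaining each record to the previous one makes deletion or reordering detectable later.

```python
# Illustrative append-only audit log. Hash chaining is an assumption
# for the example, not a description of any vendor's implementation.

import hashlib
import json
import time


class AuditLog:
    def __init__(self, path: str):
        self.path = path
        self._prev_hash = "0" * 64

    def record(self, event: dict) -> None:
        entry = {
            "ts": time.time(),      # logged at access time, not hours later
            "event": event,
            "prev": self._prev_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")  # persisted before control returns
        self._prev_hash = digest


log = AuditLog("access-audit.jsonl")
log.record({"user": "analyst@example.com", "action": "view",
            "doc": "Q3-forecast.xlsx", "decision": "allow"})
```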
The Insider Threat Reality
Gartner dedicates significant space to insider threats, noting that sophisticated insiders can still exfiltrate data via rogue AI agents by pausing the GSA client, using unmanaged devices, or running unregistered agents. This is why a robust incident response capability matters.
This resonates with me because it matches what we hear from customers. The external threat gets all the attention, but the insider risk, whether malicious or accidental, is a significant and often underestimated exposure. Effective DLP strategies must account for both. And AI agents create new vectors. An insider who might hesitate to email themselves a sensitive document might not think twice about asking an AI assistant to “help analyze” that same document and export the insights.
Our approach has been to assume breach and design accordingly. SafeVIEW lets users see data without possessing it: the file never downloads; they’re looking at a watermarked image stream. SafeEDIT is similar for editing: the document lives on our servers, and the user edits via a streamed video of a virtual desktop. This possessionless editing approach means that even if someone’s account is compromised, the data never leaves the controlled environment.
Is this friction? Yes. But it’s appropriate friction for sensitive data. Proper data classification combined with the Data Policy Engine decides dynamically when that friction applies based on the data, the user, and the context.
On Not Being Microsoft
I want to be clear about something: I’m not anti-Microsoft. We integrate with Microsoft extensively. Our customers use SharePoint and OneDrive and Outlook via our Microsoft Office 365 plugin. We propagate MIP sensitivity labels and enforce policies based on them.
But Gartner is right that depending entirely on Microsoft for governance creates security risk management challenges. Not because Microsoft is doing anything wrong, but because no single vendor can govern everything. The report’s language is worth quoting: “Only a neutral trusted guardian agent layer could enforce universal registration and routing to close the gap no single provider can fix alone.”
That’s what we’re trying to be. Not a replacement for Microsoft’s governance capabilities, but an augmentation. A layer that extends those capabilities beyond the tenant boundary, into external collaboration via Kiteworks secure file sharing, into third-party AI systems through our AI Data Gateway, into the messy reality of how organizations actually operate.
What I’m Still Uncertain About
I don’t have all the answers here. The AI risk governance space is moving fast, faster than any of us can fully track. Microsoft’s preview capabilities will reach GA eventually. Other platforms will develop their own approaches. The Model Context Protocol that our MCP Server implements is from Anthropic, and we’re betting it becomes a standard, but I could be wrong about that.
What I’m confident about is the architectural principle: independent governance layers that don’t depend on any single vendor for enforcement. A true zero trust architecture approach seems right regardless of how specific technologies evolve.
And I’m confident that the organizations taking AI governance and data compliance seriously now, before a breach forces them to, are going to be in much better shape than those waiting for vendors to solve it for them.
The Analyst Conversation
Speaking of which, if you’re reading this and you have access to these Gartner analysts, I’d encourage you to engage with them on this topic. Avivah Litan in particular has been thinking about AI risk and TPRM for a long time, and the TRiSM framework is substantive work. We’re planning to brief all three analysts on how we’re approaching these problems, and I suspect those conversations will shape our roadmap in ways I can’t predict.
That’s how it should work, I think. The vendors who only talk to analysts when they want coverage are missing the point. The real value is in the dialogue.
And to learn more about how Kiteworks addresses these challenges, contact us or schedule a custom demo today.