Healthcare Has AI Governance Committees Everywhere. Almost Nobody Knows What AI Is Actually Running.

Healthcare loves a committee. Stand up a governance body, draft a charter, assign cross-functional representatives, schedule quarterly meetings. Check the box.

The problem is that committees don’t govern anything if they can’t see what’s happening. And right now, most healthcare organizations cannot see what AI is doing inside their environments.

That’s the central finding from the 2026 Healthcare Cybersecurity Benchmarking Study, announced by Censinet at ViVE 2026 in Los Angeles. The study — delivered in partnership with the American Hospital Association, Health-ISAC, the Health Sector Coordinating Council, the Scottsdale Institute, and The University of Texas at Austin — surveyed healthcare organizations across a range of cybersecurity maturity and AI governance readiness levels. The results paint a picture of a sector that has built impressive governance scaffolding around AI while leaving the operational foundation unfinished.

The numbers are stark. Seventy percent of healthcare organizations have established AI governance committees. Only 30% maintain an enterprise-wide AI inventory. Over half have no documented method for detecting when vendors embed AI into existing products. And 64% are already experimenting with or deploying agentic AI — autonomous systems that can reason, act, and interact with enterprise resources independently.

Censinet CEO Ed Gaudet framed the disconnect directly: healthcare has built the governance scaffolding for AI, but the operational muscle — inventory, asset management, detection methods, and clear accountability — is not keeping pace with adoption.

This matters for a sector that handles some of the most sensitive data in existence. When AI systems process protected health information without inventory tracking, without audit trails, and without clear ownership, the result is not just a compliance gap. It’s a patient safety risk. And it is precisely this gap — between governance structure and operational enforcement — that data governance platforms like Kiteworks are designed to close.

5 Key Takeaways

  1. Healthcare Is Better at Reacting to Attacks Than Preventing Them. The 2026 Benchmarking Study confirms a pattern that should alarm every healthcare CISO: organizations have strengthened incident response capabilities, but foundational preventive controls — governance, asset management, and supply chain preparedness — continue to lag behind. Healthcare is investing in cleaning up after breaches while underinvesting in stopping them from happening.
  2. AI Governance Committees Exist. Operational Controls Do Not. Seventy percent of healthcare organizations have established AI governance committees. Only 30% maintain an enterprise-wide AI inventory. That’s a 40-point gap between having a governance structure and actually knowing what AI systems are operating inside the organization. Committees without visibility are governance theater. Closing this gap requires operational infrastructure — like comprehensive audit trails and data classification enforcement — that gives governance committees the real-time visibility they need to enforce their own policies.
  3. Shadow AI From Vendors Is the Blind Spot Nobody Is Watching. Over half of healthcare organizations have no documented methodology for detecting when vendors embed AI capabilities into existing products. This means AI may be processing protected health information right now — through tools the organization already approved — without anyone knowing it. Approval gates designed for the original product don’t cover what the product becomes after a vendor updates it with AI. Continuous monitoring of vendor data access patterns detects behavioral changes that signal new capabilities, and comprehensive audit trails document exactly what data vendors access across every channel.
  4. Agentic AI Is Already in Production — Governance Is Not. Sixty-four percent of healthcare organizations are either experimenting with or actively deploying agentic AI. Only 8% have drawn a hard line against it. Among organizations reporting that AI adoption is outpacing their readiness, more than half say they need better formal governance procedures above all else. The adoption horse has left the barn. The governance fence is still being built. Effective governance requires least-privilege data access, purpose-bound restrictions, continuous verification, and the audit infrastructure that turns governance policies into enforced controls.
  5. Rural Health Systems Face the Same Threats With a Fraction of the Resources. Rural-only health systems match their peers on NIST Cybersecurity Framework fundamentals but are twice as likely to have no AI governance at all. They face the same cyber threats and the same pressure to adopt AI as the largest health systems in the country — without the budgets, staff depth, or specialized expertise to manage the risk. The AI governance gap in rural healthcare is not a technology problem. It’s a resource problem that demands scalable solutions that deliver enterprise-grade governance without requiring dedicated governance staff.

Stronger at Response, Weaker Where It Counts

The benchmarking study reveals a maturity imbalance that has persisted across multiple years of research. Healthcare organizations have made measurable progress in their ability to respond to cybersecurity incidents. Detection capabilities have improved. Incident response plans are more mature. Recovery procedures are better documented.

But the preventive side of the equation — the controls that stop breaches from happening in the first place — continues to lag. Governance structures remain incomplete. Asset management practices don’t account for the full scope of digital infrastructure, particularly AI workloads. Supply chain preparedness is underdeveloped despite the sector’s heavy reliance on third-party vendors and business associates.

This pattern is not unique to healthcare, but it is particularly dangerous in healthcare. Criminal and nation-state-supported cyberattacks continue to target the sector’s critical infrastructure. The rapid integration of AI, without adequate oversight, introduces new risk vectors that connect directly to patient safety and care delivery. John Riggi, National Advisor for Cybersecurity and Risk at the American Hospital Association, put it plainly: cybersecurity and AI governance are no longer separate disciplines. To defend one is to defend all.

The practical implication is that healthcare organizations need to rebalance their cybersecurity investment. Response capabilities are necessary but insufficient. Prevention — which means governance, asset visibility, access controls, and supply chain oversight — is where the underinvestment creates the most exposure.

The 40-Point Gap Between Governance and Visibility

Seventy percent with AI governance committees. Thirty percent with AI inventory. That 40-point gap is the single most important number in the 2026 study.

An AI governance committee without an AI inventory is a board meeting without an agenda. The committee can set policies, approve use cases, and define risk thresholds — but it cannot enforce any of those decisions if it doesn’t know what AI systems exist, what data they access, who authorized them, or what actions they take.

The inventory problem is not just about counting AI tools. It’s about understanding the data flows. Which AI systems have access to protected health information? What data was used to train or fine-tune models? Which vendor products now include AI capabilities that weren’t present when the original contract was signed? Who authorized each AI system’s access, and under what conditions can that access be revoked?
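
These questions map naturally onto a minimal inventory record. The Python sketch below is illustrative only; the field names and report logic are assumptions made for the example, not a schema from the study or any vendor product:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInventoryRecord:
    """One entry in an enterprise AI inventory (illustrative fields only)."""
    system_name: str
    vendor: str
    accesses_phi: bool                # does the system touch protected health information?
    phi_categories: list[str]         # e.g. ["diagnosis", "lab_results"]
    training_data_sources: list[str]  # data used to train or fine-tune the model
    authorized_by: str                # the accountable human owner
    approval_date: date
    revocation_condition: str         # under what conditions access can be revoked
    added_via_vendor_update: bool = False  # AI embedded after the original approval

def unreviewed_phi_systems(inventory: list[AIInventoryRecord]) -> list[str]:
    """Systems that touch PHI but arrived via a vendor update, i.e. were
    never re-reviewed by the governance committee."""
    return [r.system_name for r in inventory
            if r.accesses_phi and r.added_via_vendor_update]
```

Even a record this small answers the committee’s core questions: what data, whose authorization, and what triggers revocation.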

Without answers to these questions, governance committees are operating blind. They can produce policies. They cannot verify compliance with those policies. And in a regulatory environment governed by HIPAA, the NIST AI Risk Management Framework, and increasingly sector-specific AI requirements, the inability to demonstrate operational compliance is itself a material risk.

Closing this gap requires operational infrastructure that most governance committees do not have. Comprehensive audit trails track every AI interaction with healthcare data; centralized logging records who accessed what, when, and through which system; and data classification capabilities enforce access policies automatically rather than relying on manual review. The governance committee provides the strategy. Operational execution infrastructure turns strategy into enforceable controls and auditable evidence.

Shadow AI Is Not Coming From Employees. It’s Coming From Your Vendors.

The traditional shadow AI narrative focuses on employees using unauthorized AI tools — personal ChatGPT accounts, unapproved browser extensions, free-tier platforms. That risk is real and well-documented.

But the Censinet study surfaces a different kind of shadow AI that is harder to detect and potentially more dangerous: AI capabilities quietly embedded into products and platforms that healthcare organizations have already approved.

Over half of healthcare organizations have no documented methodology for detecting this. A vendor updates an existing platform with AI-powered analytics. A business associate integrates machine learning into a data processing pipeline. An EHR module adds AI-driven clinical decision support. In each case, the product was approved through the organization’s standard procurement process — before the AI capability existed. The governance committee reviewed and approved a product. The product changed. The review did not happen again.

This creates a specific compliance risk under HIPAA. If a vendor’s AI capability processes protected health information in ways not covered by the existing business associate agreement, the organization may be exposed without knowing it. The data is flowing. The AI is processing. And the audit trail that would demonstrate compliance — or reveal the violation — does not exist.

Detecting vendor-embedded AI requires continuous monitoring of data access patterns, not one-time assessments. Kiteworks monitors all channels through which vendors interact with organizational data — email, file sharing, APIs, SFTP transfers, and direct system access — through a unified audit infrastructure. When vendor accounts exhibit unusual behavior — sudden changes in data volume, access frequency, or query complexity — Kiteworks flags the deviation immediately. This gives security teams the visibility to identify when a vendor product has changed in ways that affect how protected health information is being processed.

Committees and approval gates alone cannot solve this problem. Detection requires continuous, operational monitoring that tracks data access in real time and surfaces the deviations that signal new, unreviewed capabilities.
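
One way to operationalize that monitoring is a rolling statistical baseline over each vendor account’s daily data-access volume. The sketch below is a simplified illustration; the window size and z-score threshold are arbitrary assumptions, and it says nothing about how any particular product implements detection:

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts: list[int], window: int = 14,
                   z_threshold: float = 3.0) -> list[int]:
    """Return indices of days whose access count deviates from the
    trailing-window baseline by more than z_threshold standard deviations.
    A sudden jump can signal a new capability, such as vendor-embedded AI."""
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on a perfectly flat baseline
        if abs(daily_counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged
```

A vendor account that has quietly started sending data to a new AI analytics pipeline shows up as a statistical break in its own history, which is exactly the signal a one-time assessment cannot produce.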

Agentic AI Is Already Here. The Governance Framework Isn’t.

Perhaps the most consequential finding in the 2026 study is the pace of agentic AI adoption. Sixty-four percent of healthcare organizations are either experimenting with or actively deploying agentic AI systems — autonomous AI that can execute multi-step processes, access databases, interact with APIs, and make operational decisions with limited human oversight.

Only 8% have drawn a hard line against agentic AI adoption. The remaining organizations are somewhere on the spectrum between exploration and production deployment.

Agentic AI introduces a fundamentally different risk profile than the chatbots and analytics tools that dominated earlier waves of healthcare AI adoption. A chatbot answers questions based on data it’s given. An agentic AI system acts on that data — retrieving records, updating systems, triggering workflows, and interacting with external services. Each agent creates a non-human identity that requires authentication, authorization, and monitoring at a level most healthcare identity management systems were not designed to handle.

The governance implications are significant. Every agentic AI system needs to be inventoried. Its data access must be logged in audit trails. Its permissions must follow least-privilege principles with continuous verification — not authenticate-once-access-forever. Its actions must be auditable. And clear ownership must be assigned so that when something goes wrong — when an agent accesses data it shouldn’t, or takes an action outside its intended scope — there is an accountable party and an escalation path.
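
Those requirements reduce to a per-request policy check with an audit entry attached to every decision. The Python sketch below is a hypothetical model; the grant fields and function names are invented for illustration and do not describe any specific product’s API:

```python
from dataclasses import dataclass

@dataclass
class AgentGrant:
    agent_id: str
    principal: str                # the human who authorized the agent
    allowed_purposes: set         # purpose binding: functions the agent may perform
    allowed_classifications: set  # data classes the agent may touch
    active: bool = True           # the kill switch: revoking flips this to False

def authorize(grant: AgentGrant, purpose: str, classification: str,
              audit_log: list) -> bool:
    """Continuous verification: every request is evaluated against the
    current grant, and every decision is logged with the accountable
    human principal attached."""
    allowed = (grant.active
               and purpose in grant.allowed_purposes
               and classification in grant.allowed_classifications)
    audit_log.append({
        "agent": grant.agent_id,
        "principal": grant.principal,
        "purpose": purpose,
        "classification": classification,
        "decision": "allow" if allowed else "deny",
    })
    return allowed
```

Because the check runs on every request rather than once at login, revoking the grant takes effect immediately, and the log provides the evidence trail that governance committees currently lack.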

Kiteworks provides the operational infrastructure to govern agentic AI at the data layer. Through its Secure MCP Server, Kiteworks enables agentic AI to access enterprise data with OAuth 2.0 authentication and role-based access controls that ensure every agent inherits the permissions of its authorizing human principal — and cannot escalate beyond those boundaries. Purpose binding restricts each agent to the specific data classifications and functions it was authorized for. Continuous verification evaluates every data request against current policies. And comprehensive audit trails document every action the agent takes, creating the evidence trail that governance committees need to enforce their own policies.

Among organizations reporting that AI adoption is outpacing their readiness, more than half identify better formal governance procedures as their top need. The demand signal is clear.

Rural Healthcare: Same Risks, Different Reality

Brian Sterud, VP and CIO at Faith Regional Health Services, described the challenge facing rural health systems with characteristic directness: they face the same cyber threats and the same pressure to adopt AI as the largest systems in the country, but without the same budgets or the same bench depth.

The benchmarking data supports this. Rural-only health systems match their peers on NIST Cybersecurity Framework fundamentals — the basic blocking and tackling of cybersecurity. But they are twice as likely to have no AI governance at all. Not weaker governance. No governance.

This gap is not driven by awareness or willingness. It’s driven by resources. Rural systems typically lack dedicated AI governance staff, dedicated threat analysts, and the budget to deploy enterprise-grade governance platforms. They need solutions that deliver the same audit trails, policy enforcement, and compliance capabilities as large health systems — without requiring the same headcount or infrastructure investment.

Kiteworks delivers automated policy enforcement, pre-configured compliance templates aligned to HIPAA and NIST frameworks, and out-of-the-box audit capabilities that reduce the governance burden on small IT teams. Rural health systems get the same comprehensive data governance that the largest health systems deploy — without needing to build the infrastructure from scratch or hire dedicated governance staff to operate it.

Nobody Owns AI Risk. Everybody Assumes Somebody Else Does.

Thirty-eight percent of healthcare organizations either share AI risk responsibility across multiple groups without clear escalation paths or have not defined ownership at all. This is a structural vulnerability that will become more acute as AI deployments scale.

Fragmented risk ownership creates a specific failure mode: when an AI-related incident occurs — a data exposure, a compliance violation, an unauthorized access event — there is no clear path from detection to response. The CISO thinks the compliance team owns it. The compliance team thinks the AI governance committee owns it. The AI governance committee thinks the clinical informatics team owns it. Meanwhile, the incident escalates.

Resolving this requires unified visibility. When the CISO, the compliance officer, the privacy officer, and the AI governance committee all have access to the same comprehensive audit trail — the same record of what happened, when, and which system was involved — ownership disputes shrink because the facts are shared. Kiteworks provides this unified visibility through its consolidated activity log and CISO Dashboard, which give every stakeholder access to the same real-time audit data. Automated alerting with clear escalation rules ensures that incidents reach the right people immediately, regardless of organizational structure.

Janet Guptill, President and CEO of the Scottsdale Institute, captured the broader shift: cybersecurity and AI governance are no longer just technical challenges. They are strategic imperatives that require senior leadership engagement across the enterprise. From CISOs to CEOs to board members, the accountability for AI risk needs to be explicit, documented, and operationally enforced.

From Structure to Execution: What Healthcare Organizations Should Do Now

Build a centralized AI inventory and keep it current. Every AI system that touches organizational data — whether deployed internally, embedded in vendor products, or accessed through third-party APIs — must be identified, catalogued, and monitored. Comprehensive audit trails provide the foundation for this inventory by logging every data interaction across every channel, making it possible to identify when new AI systems or capabilities begin accessing organizational data.

Implement continuous monitoring for vendor-embedded AI. Move beyond point-in-time assessments. Kiteworks monitors vendor data access patterns continuously across email, file sharing, APIs, SFTP, and managed file transfer. When vendor behavior changes — signaling new AI capabilities — Kiteworks flags the deviation and documents the evidence. Update business associate agreements to require vendor disclosure of AI capability changes, and use audit trails to verify compliance.

Establish clear AI risk ownership with documented escalation paths. Assign explicit accountability for AI risk at the executive level. Kiteworks’ unified audit infrastructure ensures the CISO, compliance officer, privacy officer, and AI governance committee share access to the same data. Automated alerting with escalation rules ensures incidents reach the right people immediately. Define escalation procedures that specify who responds to AI-related incidents, under what conditions, and with what authority.

Apply zero-trust principles to all AI workloads. Every AI system and agent should operate under least-privilege access with continuous verification. Kiteworks enforces this through attribute-based access controls that evaluate data classification, agent identity, and intended use for every request. Purpose binding restricts AI access to specific data categories and functions. Continuous verification evaluates every data request against current policies — not authenticate once, access forever.

Invest in prevention, not just response. Rebalance cybersecurity spending toward the foundational controls the study identifies as lagging: governance maturity, asset management, data classification, and supply chain oversight. Kiteworks delivers the preventive infrastructure — automated policy enforcement, data classification, access controls, and vendor monitoring — that stops breaches from happening rather than documenting them after the fact.

Scale governance for resource-constrained environments. Rural and smaller health systems need governance solutions that deliver enterprise-grade audit trails, automated policy enforcement, and compliance reporting without requiring dedicated AI governance staff. Kiteworks provides this scalability: pre-configured compliance templates, automated enforcement, and out-of-the-box audit capabilities that reduce the governance burden on resource-constrained IT teams.

The Governance Gap Will Not Close Itself

The 2026 Healthcare Cybersecurity Benchmarking Study delivers a message that is uncomfortable but necessary: healthcare has done the easy part of AI governance. The committees exist. The charters are written. The conversations are happening.

The hard part — building the operational infrastructure that turns those committees into effective governance bodies — is where most organizations are falling short. AI inventory. Continuous monitoring. Audit trails. Vendor detection. Clear ownership. These are not aspirational goals. They are operational requirements in a sector where AI is already processing protected health information, where agentic AI systems are already acting autonomously, and where adversaries are already using AI to attack healthcare’s critical infrastructure.

Kiteworks provides the data governance foundation that makes AI governance operational. Comprehensive audit trails that prove controls are enforced. Continuous monitoring that detects vendor-embedded AI before it becomes a compliance violation. Least-privilege access controls that prevent AI systems from reaching data beyond their authorized purpose. And the scalable, automated infrastructure that makes enterprise-grade governance accessible to organizations of every size.

The organizations that close this gap will be the ones that moved beyond structure to execution. That built the operational muscle to match their governance scaffolding. That treated AI governance not as a compliance exercise but as a patient safety imperative — and deployed the operational infrastructure to back it up.

To learn how Kiteworks can help, schedule a custom demo today.

Frequently Asked Questions

What should a healthcare enterprise AI inventory include?

A healthcare enterprise AI inventory needs to capture more than a list of approved tools. For each AI system, the inventory must document: what protected health information the system can access, including training data and inference inputs; who authorized the access and under what conditions it can be revoked; which vendor or business associate operates the system and what their contractual obligations are; the data classification of outputs the system produces; and whether the system operates autonomously or with human-in-the-loop oversight. Building the inventory requires continuous monitoring, not a one-time survey — because vendor products change, AI capabilities are embedded without announcement, and new deployments often bypass formal approval processes. Comprehensive audit trails that log every AI interaction with healthcare data provide the living record that keeps the inventory current, identifying when new systems or capabilities begin accessing data even when they weren’t formally declared. The inventory is also the prerequisite for HIPAA Security Rule compliance, which requires documented information system activity review for all systems processing electronic protected health information.

How does vendor-embedded AI create HIPAA compliance risk?

Under HIPAA, any vendor that creates, receives, maintains, or transmits protected health information on behalf of a covered entity is a business associate and must operate under a business associate agreement specifying the permitted uses and disclosures of that data. When a vendor embeds AI into an existing product, two problems typically arise. First, the existing BAA was negotiated before the AI capability existed and likely does not address how the AI processes PHI, whether PHI is used to train or improve the model, or how the AI’s outputs are protected. Second, if the AI capability was added through a product update — not a new procurement — the standard approval and BAA review process may not have triggered. This creates undocumented PHI exposure: the data is flowing to an AI system under terms that don’t govern that processing. Covered entities must require vendors to disclose AI capability changes, review BAAs to ensure AI processing is explicitly addressed, and use continuous monitoring to detect when vendor data access patterns change in ways that indicate new AI activity. Audit trails documenting vendor data access across every channel — SFTP, APIs, managed file transfer, email — provide the evidence needed to identify undisclosed processing and demonstrate compliance when regulators ask.

Why does agentic AI break standard identity and access management?

Standard identity and access management is designed around human users: authenticate a person at login, assign permissions to their role, and log their sessions. Agentic AI breaks this model in three ways. First, agents create non-human identities that may authenticate thousands of times per hour, making session-based review impractical. Second, agents act on data rather than reading it — they retrieve records, update systems, trigger workflows, and call external APIs — so the blast radius of over-privileged access is far larger than for human users who can be individually monitored. Third, agents may operate under the permissions of a service account, which often holds broader access than any individual human user would be granted. Healthcare organizations need to add four capabilities to their existing IAM infrastructure: purpose binding that restricts each agent to specific data categories and functions regardless of what its credentials technically allow; continuous verification that re-evaluates every data request against current policy rather than relying on session-level authentication; kill switches that can immediately revoke agent access when anomalous behavior is detected; and audit trails that log every agent action at the data level — not just authentication events — tied to the authorizing human principal. Attribute-based access control that evaluates agent identity, data classification, and intended purpose simultaneously provides the governance layer that role-based systems alone cannot.

What does zero-trust mean for AI workloads in clinical environments?

In clinical environments, zero-trust for AI workloads means treating every data request — from any AI system, regardless of its deployment context — as untrusted until evaluated against current policy. This is a significant departure from how most healthcare AI is currently deployed, where approval at the time of procurement is treated as ongoing authorization. Operationally, zero-trust for AI requires four elements. Least-privilege data access by default: AI systems can only access the specific PHI classifications required for their defined clinical function — a clinical decision support tool accesses diagnosis data, not billing records. Continuous verification: every query is checked against the AI’s authorized purpose and the data’s sensitivity classification, not just at login. Data loss prevention controls: automated enforcement that blocks AI systems from exporting, transmitting, or processing data outside defined boundaries. And behavioral anomaly detection: baseline monitoring that flags when an AI system’s data access pattern deviates from its established norm — unusual query volumes, access to new record types, or off-hours activity — triggering automated alerting and, where necessary, automated access suspension. In a sector where ransomware attacks increasingly target clinical AI systems as pivot points into broader infrastructure, zero-trust at the data layer is the control that limits blast radius when perimeter defenses fail.

Who is accountable for AI governance under healthcare regulations?

Regulatory accountability for AI governance in healthcare is converging from multiple directions. HIPAA’s Security Rule requires covered entities to implement policies and procedures for information system activity review — an obligation that extends to AI systems processing electronic PHI and requires documented evidence that reviews occur, not just that policies exist. The NIST AI Risk Management Framework recommends that AI governance include senior leadership accountability with defined roles, escalation paths, and board-level reporting. HHS’ 405(d) Health Industry Cybersecurity Practices, which underpin the benchmarking study’s framework, identify governance and risk management as foundational practices with explicit ownership requirements. The Censinet study’s finding that 38% of organizations have fragmented or undefined AI risk ownership is directly incompatible with these requirements: a regulator examining a PHI breach involving an AI system will expect to see who owned the risk, what oversight existed, what audit records were maintained, and what escalation procedures were followed. Board-level accountability requires exportable compliance reports that demonstrate operational governance — not governance documents — and a unified data governance infrastructure that gives every executive stakeholder access to the same evidence base when regulators ask.

Get started.

It’s easy to start ensuring regulatory compliance and effectively managing risk with Kiteworks. Join the thousands of organizations that are confident in how they exchange private data between people, machines, and systems. Get started today.
