AI Compliance by Industry: A Regulatory Reference Guide

There is no universal AI compliance framework. Every organization deploying AI inherits the regulatory obligations attached to the data it processes — and those obligations vary dramatically by industry, data type, and jurisdiction. A defense contractor and a hospital deploying the same AI tool face entirely different compliance requirements. A financial services firm and a law firm face different frameworks still. The AI governance question is not “what does AI compliance require?” It is “what does AI compliance require for my organization, my data, and my use case?”

This reference guide maps the AI compliance landscape across six industries — federal contractors, financial services, healthcare, manufacturing, legal, and state and local government — to help compliance officers, CISOs, CIOs, and general counsel quickly identify which frameworks apply to their deployments, what the most consequential governance gaps are in their sector, and where to find deeper guidance.

One principle applies across every sector: regulators regulate data, not models. The compliance obligation is not determined by which AI vendor you use or what certifications it holds. It is determined by what data your AI agents access, what they do with it, and whether you can produce evidence of governance when an auditor or regulator asks.

Executive Summary

Main idea: AI compliance is sector-specific and framework-stacking — the regulatory obligations attached to your AI deployments are determined by the data your AI touches, not by the AI tool itself. This guide provides a cross-industry reference to the frameworks, requirements, and governance gaps that matter most for each major sector.

Why you should care: The same document summarization tool that is a low-risk productivity application for a general enterprise is a HIPAA compliance risk in healthcare, a CMMC compliance risk in defense contracting, and an attorney-client privilege risk in a legal context. Understanding which frameworks apply to your sector — before deployment, not after — is the foundational step in a defensible AI governance program.

Key Takeaways

  1. No single AI compliance framework applies universally — the frameworks governing your AI deployments are determined by the data they access, the industry you operate in, and the jurisdictions in which you do business.
  2. Most enterprise AI deployments are subject to multiple overlapping frameworks simultaneously — a defense manufacturer may face CMMC, ITAR, GxP, and NIS 2 for a single AI deployment; a financial services firm with EU operations faces SR 11-7, GLBA, NYDFS, and GDPR.
  3. The same four technical controls satisfy the evidentiary standard across virtually every AI compliance framework: authenticated AI agent identity linked to a human authorizer; operation-level ABAC access policy; FIPS 140-3 validated encryption; and tamper-evident audit trails feeding a SIEM.
  4. Compliance failures in AI governance are almost never the result of novel requirements — they are the result of failing to extend well-established data governance obligations (access controls, audit trails, encryption, minimum necessary access) to AI agents that are now performing functions previously performed by humans.
  5. The most dangerous AI compliance gap in every sector is not technical — it is organizational: AI deployed without a governance owner, an access scope definition, or audit trail infrastructure, creating regulated data exposure that regulatory enforcement can reach.

The Cross-Industry AI Compliance Framework

Before examining sector-specific requirements, three organizing principles apply across every industry covered in this guide.

Regulators regulate data, not models. HIPAA does not care whether PHI was accessed by a human nurse or an AI agent. CMMC does not distinguish between a cleared employee and an autonomous workflow touching CUI. The compliance obligation is identical — govern the data layer. The AI vendor’s certifications and system prompts operate at the model layer. Compliance auditors examine the data layer: who accessed what, under what authorization, with what encryption, and with what audit record.

Four technical controls satisfy virtually every framework. Across CMMC, HIPAA, GLBA, CJIS, GxP, GDPR, NYDFS Part 500, and every other framework in this guide, the required governance converges on the same four controls: authenticated AI agent identity linked to a human authorizer; ABAC policy at the operation level restricting AI to the minimum data necessary; FIPS 140-3 Level 1 validated encryption in transit and at rest; and a tamper-evident audit trail per interaction feeding a SIEM. Implementing these four controls satisfies the evidentiary standard across virtually all applicable frameworks simultaneously.
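To make the convergence concrete, here is a minimal, hypothetical sketch — not any vendor's actual API — of how three of the four controls (agent identity linked to a human authorizer, operation-level attribute checks, and a tamper-evident audit chain) can live in one gateway. The policy rules, agent names, and labels are invented for illustration:

```python
import hashlib
import hmac
import json
import time

# Hypothetical operation-level policy: each operation names the data labels
# an AI agent may touch and, optionally, a role the authorizer must hold.
POLICY = {
    "summarize_document": {"allowed_labels": {"public", "internal"}},
    "query_patient_record": {"allowed_labels": {"phi"}, "required_role": "clinical"},
}

class GovernedGateway:
    def __init__(self, secret: bytes):
        self._secret = secret
        self._audit_log = []          # in practice this would feed a SIEM
        self._prev_digest = b"genesis"

    def authorize(self, agent_id, human_authorizer, operation, data_label, role=None):
        rule = POLICY.get(operation)
        allowed = (
            rule is not None
            and human_authorizer is not None          # identity linked to a human
            and data_label in rule["allowed_labels"]  # minimum necessary data
            and rule.get("required_role") in (None, role)
        )
        self._append_audit(agent_id, human_authorizer, operation, data_label, allowed)
        return allowed

    def _append_audit(self, *fields):
        # Tamper evidence: each entry's MAC covers the previous entry's MAC,
        # so altering any record breaks the chain from that point forward.
        entry = {"ts": time.time(), "fields": fields}
        payload = self._prev_digest + json.dumps(entry, default=str).encode()
        digest = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        entry["mac"] = digest
        self._audit_log.append(entry)
        self._prev_digest = digest.encode()

    def verify_chain(self):
        # Recompute every MAC from the genesis value; any edit is detected.
        prev = b"genesis"
        for entry in self._audit_log:
            payload = prev + json.dumps(
                {"ts": entry["ts"], "fields": entry["fields"]}, default=str
            ).encode()
            if hmac.new(self._secret, payload, hashlib.sha256).hexdigest() != entry["mac"]:
                return False
            prev = entry["mac"].encode()
        return True
```

The point of the sketch is the shape, not the specifics: every authorization decision — allowed or denied — produces a chained audit record, which is what lets a single architecture answer examiners across frameworks.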

Compliance gaps follow a consistent pattern. Across every sector, AI governance failures follow the same pattern: AI deployed without extending existing data governance obligations to new AI-driven access patterns. The access controls, audit requirements, encryption standards, and minimum necessary standards implemented for human employee access rarely get extended to AI agents at the same time — and that gap is where compliance exposure accumulates.


Sector-by-Sector Regulatory Reference

Table 1: AI Compliance Requirements by Industry — Quick Reference
Federal Contractors
  • Primary Frameworks: CMMC 2.0 / NIST 800-171; DFARS; FedRAMP; ITAR; FISMA
  • Most Consequential AI Requirement: All 110 NIST 800-171 practices apply to AI accessing CUI — no AI exemption; FIPS encryption and operation-level audit logs required
  • Highest-Risk Compliance Gap: ITAR exposure through commercial AI tools routing controlled technical data through non-U.S.-person infrastructure

Financial Services
  • Primary Frameworks: SR 11-7; GLBA; NYDFS Part 500; PCI DSS; DORA; GDPR
  • Most Consequential AI Requirement: SR 11-7 model risk validation, ongoing monitoring, and human override documentation required for AI influencing financial decisions
  • Highest-Risk Compliance Gap: AI accessing NPI or cardholder data without operation-level access controls, producing no audit trail sufficient for regulatory examination

Healthcare
  • Primary Frameworks: HIPAA / HITECH; FDA CDS guidance; 21 CFR Part 11; GxP; EHDS; GDPR
  • Most Consequential AI Requirement: HIPAA minimum necessary access enforced at the operation level for AI accessing ePHI; BAAs required for AI vendors; FDA CDS device classification required before clinical AI deployment
  • Highest-Risk Compliance Gap: AI clinical tools misclassified as non-device CDS; missing BAAs with AI vendors; absent operation-level audit trails for AI-PHI interactions

Manufacturing
  • Primary Frameworks: CMMC 2.0; ITAR; GxP / 21 CFR Part 11; TISAX; NIS 2; ISO 27001
  • Most Consequential AI Requirement: GxP CSV validation required for AI in regulated pharmaceutical and device manufacturing environments; ITAR exposure assessment required for AI touching controlled technical data
  • Highest-Risk Compliance Gap: AI tools in the defense supply chain accessing CUI without CMMC controls; GxP validation gaps for AI in production environments

Legal
  • Primary Frameworks: ABA Model Rules 1.1, 1.6, 5.3; attorney-client privilege; eDiscovery (FRCP); client data protection agreements; GDPR; CCPA
  • Most Consequential AI Requirement: AI vendor access to privileged content must be assessed for waiver risk; TAR methodology must be documented and defensible; client data protection agreements require AI tool approval before use
  • Highest-Risk Compliance Gap: Commercial AI tools routing privileged communications to external infrastructure; undisclosed AI use violating client data protection agreements

State and Local Government
  • Primary Frameworks: CJIS; StateRAMP / FedRAMP; HIPAA; FERPA; state AI governance laws; public records / FOIA
  • Most Consequential AI Requirement: CJIS requires FIPS encryption, MFA, and operation-level audit logs for every AI system accessing CJI; StateRAMP authorization required before cloud AI processes government data
  • Highest-Risk Compliance Gap: AI in adjudicative decisions without due process safeguards; commercial AI tools procured without StateRAMP or FedRAMP authorization verification

Federal Contractors: AI Compliance Under CMMC, ITAR, and FedRAMP

The CMMC 2.0 Final Rule applies to the entire Defense Industrial Base (DIB) supply chain — Tier 2 and Tier 3 suppliers handling CUI face the same 110 NIST SP 800-171 practices as prime contractors, with no AI exemption.

AI agents accessing CUI must satisfy authenticated identity, least privilege access, FIPS-validated encryption, and tamper-evident audit logging. The DOJ Civil Cyber-Fraud Initiative has created False Claims Act exposure for contractors certifying CMMC compliance without implementing those controls for AI workflows.

Independently, ITAR compliance creates criminal export control exposure for defense manufacturers whose AI tools process controlled technical data through infrastructure not under U.S.-person control — a risk most manufacturers have not assessed for their commercial AI deployments.

FedRAMP authorization is required for cloud-hosted AI tools used in federal systems.

For a comprehensive treatment, see: AI Compliance Requirements for Federal Contractors: What You Need to Know.

Financial Services: AI Compliance Under SR 11-7, GLBA, NYDFS, and More

Financial services AI compliance is multi-framework by design. SR 11-7 requires validation, ongoing monitoring, and documented human override for AI models influencing financial decisions.

The GLBA Safeguards Rule’s 2023 amendments impose specific encryption, access control, and audit log requirements for AI accessing nonpublic personal information. NYDFS Part 500’s 2023 amendments explicitly require AI systems to be included in cybersecurity programs — making Part 500 the most operationally specific U.S. financial regulation on AI governance.

PCI DSS requires unique AI agent identification, minimum necessary access, and continuous logging in cardholder data environments.

For EU-regulated institutions, DORA ICT risk requirements and GDPR automated decision-making obligations layer on top.

The same four technical controls — authenticated access, ABAC policy, FIPS encryption, and tamper-evident audit trails — satisfy the evidentiary standard across all six frameworks simultaneously.

For a comprehensive treatment, see: AI Compliance Requirements for Financial Services Firms: What You Need to Know.

Healthcare: AI Compliance Under HIPAA, FDA, GxP, and EHDS

HIPAA’s minimum necessary rule, access controls, audit requirements, and encryption standards apply fully to AI systems accessing ePHI — and Business Associate Agreements (BAAs) with AI vendors are a legal prerequisite that most organizations have not completed.

The FDA’s clinical decision support guidance imposes a classification obligation unique to healthcare: AI must be assessed as non-device CDS or device software before clinical deployment, with device AI requiring FDA premarket review.

For pharmaceutical and medical device manufacturers, GxP Computer System Validation (CSV) requirements apply to AI in regulated production and quality environments — and FDA inspections are actively examining CSV compliance for AI-enhanced systems.

The EU’s EHDS secondary use framework adds health data governance requirements for EU-operating organizations above GDPR’s general personal data protections.

For a comprehensive treatment, see: AI Compliance Requirements for Healthcare Organizations: What You Need to Know.

Manufacturing: AI Compliance Under CMMC, ITAR, GxP, and TISAX

Manufacturing is the most framework-stacking sector in this guide — a defense aerospace manufacturer may simultaneously face CMMC, ITAR, GxP, TISAX, NIS 2, and ISO 27001 for a single AI deployment.

The highest-priority gap for defense manufacturing is ITAR exposure through commercial AI tools — most manufacturers have not assessed whether AI in engineering and production workflows constitutes unlicensed export of controlled technical data, and the criminal penalties are severe.

For regulated manufacturers, GxP Computer System Validation requirements apply to AI in production and quality environments; AI systems that update their behavior as they process production data present specific CSV challenges that most organizations have not addressed in their validation frameworks before FDA inspections arrive.

For a comprehensive treatment, see: AI Compliance Requirements for Manufacturers: What You Need to Know.

Legal: AI Compliance Under ABA Model Rules, Privilege, and Client Agreements

Legal AI compliance is unique because its primary obligations flow from professional responsibility rules and fiduciary duties. ABA Model Rules 1.1, 1.6, and 5.3 impose competence, confidentiality, and supervision obligations on AI use in legal practice — obligations enforced through state bar ethics opinions and ABA Formal Opinion 512.

Attorney-client privilege can be waived by AI tools routing privileged communications to external infrastructure accessible to vendor personnel — a risk most law firms have not assessed for commercial AI deployments. 

Client data protection agreements from major institutional clients require explicit AI tool approval before any client data is processed — a requirement that is routinely not satisfied. eDiscovery TAR methodology must be documented, validated, and defensible when challenged; undocumented AI review creates sanctions exposure that proper methodology prevents.

For a comprehensive treatment, see: AI Compliance Requirements for Legal Departments and Law Firms: What You Need to Know.

State and Local Government: AI Compliance Under CJIS, StateRAMP, and Due Process

The FBI’s CJIS Security Policy applies to every AI system accessing criminal justice information — loss of CJIS access from non-compliance means loss of NCIC connectivity, which is operationally catastrophic. StateRAMP or FedRAMP authorization is required for cloud-hosted AI tools processing government data — a requirement many agencies have not built into AI procurement.

Constitutional due process requirements apply when AI influences decisions affecting residents’ rights — benefits, licensing, pretrial detention — and courts are finding due process violations in AI-driven government decisions lacking adequate transparency and human review.

State AI governance laws in over 20 states add impact assessment, transparency, and human oversight requirements independent of federal frameworks. Public records laws may require disclosure of AI-generated government decisions, methodologies, and training data.

For a comprehensive treatment, see: AI Compliance Requirements for State and Local Government: What You Need to Know.

Kiteworks Compliant AI: One Architecture That Satisfies Every Sector’s Requirements

The four technical controls that satisfy the evidentiary standard across every sector in this guide are implementable now, in a single governance architecture, before your next AI deployment. 

Kiteworks Compliant AI delivers exactly that inside the Private Data Network:

  • Every AI agent authenticated with an identity linked to a human authorizer;
  • ABAC policy at the operation level satisfying CMMC least privilege, HIPAA minimum necessary, GLBA access restrictions, CJIS minimum necessary, and GDPR data minimization simultaneously;
  • FIPS 140-3 Level 1 validated encryption satisfying CMMC SC.3.177, HIPAA encryption standards, CJIS mandates, and GLBA/NYDFS requirements across all sectors;
  • A tamper-evident audit trail per interaction feeding your SIEM, satisfying CMMC audit requirements, HIPAA audit controls, CJIS audit standards, SR 11-7 monitoring obligations, GxP Part 11 requirements, GDPR Article 30 records, and state AI governance documentation requirements in a single continuous record.

Your sector determines which frameworks apply. Kiteworks ensures you can satisfy all of them. Contact us to see how Kiteworks maps to your industry’s AI compliance requirements.

Frequently Asked Questions

Which AI compliance frameworks apply to my organization?

The frameworks that apply are determined by three factors: the data your AI systems access, the industry you operate in, and the jurisdictions in which you do business. Organizations handling CUI in defense contracting must satisfy CMMC and NIST 800-171. Organizations handling PHI must satisfy HIPAA. Organizations handling EU personal data must satisfy GDPR. Organizations in financial services must satisfy SR 11-7, GLBA, and potentially NYDFS Part 500 and PCI DSS. Law firms must satisfy ABA Model Rules professional responsibility obligations and attorney-client privilege protection requirements. Government agencies must satisfy CJIS, StateRAMP, HIPAA, and FERPA depending on the data they hold. Most enterprises in regulated industries are subject to multiple overlapping frameworks simultaneously — the right approach is to inventory what data your AI touches, then map that data to the applicable regulatory frameworks.

What is the first step toward AI compliance before deploying an AI tool?

Conduct a controlled data inventory before any AI deployment. Identify every category of regulated data — PHI, CUI, PII, cardholder data, privileged communications, education records, CJI — that a proposed AI tool can reach. That inventory determines which compliance frameworks apply, what technical controls are required, and what vendor assessment steps must be completed before deployment. AI deployed without this foundational step almost always creates compliance exposure in categories the deploying organization did not identify or address. Every AI compliance failure pattern in this guide — HIPAA violations from missing BAAs, CMMC gaps from AI accessing CUI without proper controls, privilege waivers from commercial AI tools routing privileged content externally — traces back to AI deployed before the data inventory was done.

Do my AI vendor’s certifications satisfy my organization’s compliance obligations?

No. Your AI vendor’s SOC 2, ISO 27001, or sector-specific certifications attest to the vendor’s own security posture — how they protect their infrastructure, manage internal access, and respond to incidents. They do not produce the compliance evidence your organization is required to generate: operation-level access logs for your AI agent’s interactions with your regulated data, encryption validation for your data in your environment, and audit records attributing your AI’s actions to your human authorizers. Your organization is the data controller or regulated entity; the compliance evidence obligation belongs to you, not to your vendor. No vendor certification transfers that obligation or satisfies the evidentiary standard your auditors and regulators will apply to your AI deployments.

What is the difference between AI data governance and AI compliance?

AI data governance is the organizational framework — policies, accountability structures, risk management processes, and oversight mechanisms — that defines how your organization deploys and oversees AI systems. AI compliance is the evidentiary requirement — the specific, demonstrable controls that satisfy a regulator, auditor, or data subject for a defined legal obligation. You need both, but they serve different purposes. Governance without compliance produces policy documents that fail audits. Compliance without governance produces point-in-time evidence that is not sustainable as AI deployment scales. The organizations that manage AI risk most effectively build governance programs that continuously produce compliance evidence — rather than treating compliance as a periodic assessment exercise separate from ongoing operations. The four technical controls described in this guide — authenticated access, ABAC policy, FIPS encryption, and tamper-evident audit trails — are the infrastructure where governance and compliance converge.

How do we scale AI data governance as deployment grows?

Build governance into the data access architecture, not into review processes that sit outside it. The most common AI data governance failure at scale is the manual review gate: a compliance team reviewing AI outputs before they reach regulated workflows, a process that was never designed to scale with AI deployment velocity. As AI deployment accelerates, manual review gates become bottlenecks and are bypassed. Governance built into the data layer — enforced by the infrastructure before any AI agent interacts with regulated data — scales with deployment because it is not dependent on human review volume. Every new AI deployment that routes through the governed data access layer inherits the compliance controls automatically. The investment in data-layer governance infrastructure — authenticated access, operation-level policy enforcement, encryption, audit logging — pays compounding returns as AI deployment scales, rather than creating compounding compliance debt.
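One way to picture "controls inherited automatically" is an enforcement wrapper at the data access layer: any function that reads regulated data declares its data label, and every call is policy-checked and logged before the access runs. This is a hypothetical sketch — the agent names, labels, and registry are invented for illustration:

```python
from functools import wraps

# Hypothetical registry of the regulated-data labels each AI agent may reach.
AGENT_SCOPES = {"contracts-bot": {"internal"}, "claims-bot": {"phi", "internal"}}

AUDIT = []  # stand-in for an audit trail feeding a SIEM

def governed(data_label):
    """Enforce access policy at the data layer: the check and the audit
    record happen before the data access runs, with no manual review gate."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(agent_id, *args, **kwargs):
            allowed = data_label in AGENT_SCOPES.get(agent_id, set())
            AUDIT.append({"agent": agent_id, "op": fn.__name__,
                          "label": data_label, "allowed": allowed})
            if not allowed:
                raise PermissionError(f"{agent_id} is not scoped for {data_label}")
            return fn(agent_id, *args, **kwargs)
        return wrapper
    return decorator

@governed("phi")
def read_patient_summary(agent_id, patient_id):
    # Placeholder for the actual regulated-data access.
    return f"summary for {patient_id}"
```

Because the wrapper lives on the data access function, every new AI deployment that calls it inherits the policy check and the audit record without any per-deployment review process — which is the scaling property the answer above describes.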


Get started.

It’s easy to start ensuring regulatory compliance and effectively managing risk with Kiteworks. Join the thousands of organizations who are confident in how they exchange private data between people, machines, and systems. Get started today.
