AI Compliance Requirements for Financial Services Firms: What You Need to Know
Financial services firms are among the most aggressive early adopters of AI — and among the most exposed when it comes to compliance. The regulatory environment they operate in was built for human decision-making: loan officers reviewing applications, traders executing orders, advisors managing client portfolios. AI systems stepping into those roles do not step out of the regulatory frameworks that govern them.
SR 11-7 model risk guidance, GLBA, PCI DSS, NYDFS Part 500, DORA, and GDPR all apply simultaneously — with different evidentiary standards and different enforcement mechanisms. Getting AI governance right in financial services means satisfying all of them.
Executive Summary
Main idea: Financial services AI compliance is not a single framework problem — it is a multi-regulator, multi-jurisdiction challenge where SR 11-7 model risk, GLBA, NYDFS, PCI DSS, DORA, and GDPR must all be satisfied with consistent, data-layer governance infrastructure.
Why you should care: Financial services regulators — OCC, Federal Reserve, FDIC, SEC, FINRA, NYDFS, and EU counterparts — are actively examining AI governance. Firms that cannot produce operation-level evidence of AI access controls, model oversight, and audit trails during examination will face findings and remediation orders. The cost of reactive compliance after an examination finding is consistently higher than proactive governance before one.
Key Takeaways
- SR 11-7 model risk management applies to AI models influencing financial decisions — validation, ongoing monitoring, and human override documentation are requirements, not optional best practices.
- GLBA requires financial institutions to protect customer NPI against unauthorized access — AI agents accessing that information must satisfy the same safeguard requirements as human employees.
- NYDFS Part 500 (2023 amendments) explicitly requires covered financial institutions to include AI systems within their cybersecurity programs — making it the most operationally specific U.S. financial services regulation on AI governance.
- PCI DSS restricts AI agent access to cardholder data under the same need-to-know and unique identification requirements that apply to human users.
- For institutions serving EU markets, DORA’s ICT risk requirements and GDPR’s automated decision-making obligations create parallel compliance demands that must be satisfied alongside U.S. requirements.
The Financial Services AI Compliance Landscape
SR 11-7: Model Risk Management. The Federal Reserve and OCC’s SR 11-7 guidance is the foundational AI governance framework for U.S. banking and financial services. It requires that models — including AI and machine learning models — be subject to rigorous development, validation, and ongoing monitoring. For AI specifically, SR 11-7 requires: documented model purpose and assumptions; independent validation of model performance and limitations; continuous monitoring for drift, bias, and unexpected behavior; human override documentation with clear escalation processes; and defined model retirement criteria. AI models influencing credit decisions, fraud detection, trading, or customer risk scoring are squarely within scope. FINRA and the CFTC have issued parallel guidance for broker-dealers and derivatives market participants.
GLBA Safeguards Rule. GLBA requires financial institutions to protect the security and confidentiality of nonpublic personal information (NPI). The 2023 Safeguards Rule amendments added specific requirements for encryption, access controls, MFA, and audit log retention that apply to all systems handling NPI — including AI agents accessing customer account data, transaction histories, or credit profiles. The minimum necessary standard that GLBA implies must be enforced at the operation level for AI agents, not just at the system or folder level.
NYDFS Part 500. The 2023 amendments are the most operationally specific U.S. financial services regulation addressing AI risk directly. NYDFS Part 500 requires covered entities to include AI systems in their cybersecurity programs, maintain access controls covering AI-accessible data, and produce audit evidence during examination. The annual certification requirement makes AI governance a board-level accountability issue, not purely a technical one.
PCI DSS. PCI DSS governs any system that stores, processes, or transmits cardholder data. For AI systems: unique identification for every AI agent in the cardholder data environment; minimum necessary access; continuous logging; and strong cryptography in transit and at rest. AI tools in payment processing, fraud detection, or customer service workflows that touch cardholder data are within PCI DSS scope without exception.
DORA. For EU-regulated financial institutions, DORA compliance requires ICT risk management that explicitly covers AI systems — risk classification, access controls, audit logs, resilience testing, and third-party AI provider assessment under DORA’s vendor risk framework.
GDPR. Financial services firms serving EU customers must satisfy GDPR Article 22 obligations for automated decision-making in credit scoring, fraud detection, and customer risk assessment — including lawful basis, transparency, and the right to human review — layered on top of DORA and national prudential requirements.
| Framework | AI Trigger | Key Requirement | Examined By |
|---|---|---|---|
| SR 11-7 | AI model influencing financial decisions | Validation, ongoing monitoring, human override documentation, defined escalation process | Federal Reserve, OCC, FDIC during safety and soundness examinations |
| GLBA Safeguards Rule | AI accessing or processing nonpublic personal information | Access controls, encryption, MFA, audit log retention for AI-NPI interactions | FTC, prudential regulators during information security program reviews |
| NYDFS Part 500 | AI system within covered entity’s cybersecurity program | AI included in asset inventory, access privilege controls, audit trail evidence | NYDFS during cybersecurity examinations; annual certification requirement |
| PCI DSS | AI accessing cardholder data environment | Unique AI agent identification, least-privilege access, continuous logging, strong encryption | QSA during PCI assessments; acquiring banks; card brands |
| DORA | AI system in EU-regulated financial entity’s ICT environment | ICT risk classification, access controls, audit logs, third-party AI provider assessment | National competent authorities in EU member states |
| GDPR Article 22 | Automated decisions with legal or significant effects on EU data subjects | Lawful basis, transparency, right to human review for credit, fraud, and risk scoring AI | EU supervisory authorities; DPAs in member states |
Where AI Creates the Most Significant Compliance Gaps in Financial Services
Model risk without model governance infrastructure. The most pervasive gap in financial services AI is deploying models that meet SR 11-7’s development and validation requirements but lack the ongoing monitoring and human override infrastructure the guidance also requires. SR 11-7 is explicit: validation is not a one-time gate — it is a continuous process. AI models in production must be monitored for performance drift, bias, and unexpected outputs; those monitoring results must be reviewed by qualified humans; and the override process — who can intervene, how, and when — must be documented and tested. Most financial services firms have stronger model development practices than model monitoring practices, and AI deployments are widening that gap.
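To make the monitoring half of that requirement concrete, here is a minimal sketch of a drift check built on the Population Stability Index (PSI), a metric widely used for monitoring scored financial models. The thresholds, model name, and escalation messages are illustrative assumptions, not values taken from SR 11-7; a real monitoring plan calibrates them per model and routes escalations through the documented override process.

```python
import math

# Illustrative PSI thresholds -- calibrate per model in the monitoring plan.
PSI_WATCH = 0.10      # investigate and log for the validation team
PSI_ESCALATE = 0.25   # trigger the documented human review / override path

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned score distributions (bin fractions summing to 1).
    Bin boundaries come from the validation baseline."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

def check_drift(model_id: str, baseline: list[float], current: list[float]) -> str:
    score = psi(baseline, current)
    if score >= PSI_ESCALATE:
        return f"{model_id}: PSI={score:.3f} -- escalate for human review"
    if score >= PSI_WATCH:
        return f"{model_id}: PSI={score:.3f} -- drift watch, log for validation"
    return f"{model_id}: PSI={score:.3f} -- stable"

# Example: this month's score distribution vs. the validation baseline.
print(check_drift("credit-scoring-v3",
                  baseline=[0.10, 0.20, 0.40, 0.20, 0.10],
                  current=[0.05, 0.15, 0.35, 0.25, 0.20]))
```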
AI agent access to customer financial data without operation-level controls. GLBA’s Safeguards Rule and NYDFS Part 500 require access controls that restrict who — and what AI — can reach nonpublic personal information. The specific failure mode: AI agents with broad access to customer data repositories, operating without the operation-level attribute-based access control (ABAC) enforcement that restricts each agent to only the data its specific function requires. An AI model generating customer portfolio reports that can access all customer records in a database — not just the records relevant to its current task — is operating outside GLBA minimum necessary standards and NYDFS access privilege requirements simultaneously.
Audit trail gaps for AI-driven financial decisions. Regulators examining AI governance in financial services consistently ask for the same evidence: what did the AI access, when, under what authorization, and what decision did it influence? Most financial services firms cannot produce this evidence at the operation level for AI-driven interactions. Audit logs that capture session activity but not individual data interactions do not satisfy SR 11-7’s model monitoring requirements, NYDFS Part 500’s audit trail obligations, or GLBA’s logging standards. The tamper-evident, operation-level audit infrastructure required is the same across all three frameworks — and it must feed a SIEM for continuous monitoring rather than being available only retrospectively.
Third-party AI without third-party AI governance. Financial services firms are heavy consumers of third-party AI — embedded in trading platforms, wealth management tools, compliance monitoring systems, and customer service applications. DORA’s third-party risk management requirements, SR 11-7’s vendor model risk guidance, and GLBA’s Safeguards Rule all impose obligations on how third-party AI is selected, monitored, and governed. The specific gap: firms conducting vendor due diligence for cybersecurity posture but not for AI-specific governance — whether the vendor’s AI produces auditable outputs, satisfies encryption requirements at the data layer, and provides the access logging that examination requires.
Emerging AI-Specific Guidance for Financial Services
SEC AI Disclosure and Governance. The SEC requires public companies — including financial services firms — to disclose material AI risks and the governance processes managing them. For asset managers and broker-dealers, SEC and FINRA have signaled examination focus on AI governance in investment recommendations, customer communications, and trading systems. AI governance infrastructure producing documented, auditable evidence of controls is the standard examiners apply.
OCC and Federal Reserve AI Guidance. U.S. banking regulators have published statements on responsible AI use emphasizing explainability, fairness, and governance documentation as examination expectations. OCC fair lending procedures now explicitly address AI-driven credit decision systems, requiring validation evidence for disparate impact and genuine — not nominal — human oversight mechanisms.
FINMA Circular 2023/1. FINMA’s operational risk guidance explicitly addresses algorithmic and AI-driven decision systems in Swiss financial institutions, requiring governance documentation, ongoing monitoring, and senior management accountability. International firms with Swiss operations face FINMA requirements as an additional compliance layer.
EU AI Act — High-Risk Financial AI. The EU AI Act classifies AI used in credit scoring, insurance risk assessment, and certain investment advisory functions as high-risk — triggering conformity assessment, human oversight, and technical documentation requirements. For firms in EU markets, this adds a third framework on top of DORA and GDPR for high-risk financial AI deployments.
Building a Compliant AI Program for Financial Services
The underlying governance requirements converge on the same technical controls across SR 11-7, GLBA, NYDFS, PCI DSS, and DORA. A single data-layer governance architecture — authenticated access, operation-level access policy, validated encryption, and tamper-evident audit trails — satisfies the evidentiary standard across all of them.
Build AI into your model risk management program from day one. SR 11-7 applies to AI models as it does to statistical models. Every AI model influencing financial decisions must have a model inventory entry, a validation record, an ongoing monitoring plan with defined thresholds, and a documented human override process. AI deployed without these elements is not SR 11-7 compliant regardless of model sophistication.
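As an illustration of what such an inventory entry can capture as a structured record, the sketch below covers those four elements. Every field name and value here is hypothetical; an actual model risk management program would map these to its own inventory schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    """Illustrative SR 11-7-style inventory record for an AI model.
    Field names are hypothetical, not taken from the guidance text."""
    model_id: str
    purpose: str                       # documented purpose and assumptions
    owner: str                         # accountable human owner
    validation_date: date              # last independent validation
    validation_report: str             # pointer to the validation record
    monitoring_thresholds: dict[str, float] = field(default_factory=dict)
    override_process: str = ""         # documented escalation / override path
    retirement_criteria: str = ""      # defined decommissioning conditions

entry = ModelInventoryEntry(
    model_id="fraud-detect-v7",
    purpose="Real-time card fraud scoring for consumer transactions",
    owner="model-risk@example.com",
    validation_date=date(2025, 3, 14),
    validation_report="mrm/validations/fraud-detect-v7-2025Q1.pdf",
    monitoring_thresholds={"psi": 0.25, "false_positive_rate": 0.08},
    override_process="Tier-2 analyst may override any score; overrides logged and reviewed weekly",
    retirement_criteria="Retire if PSI exceeds 0.25 for two consecutive quarters",
)
```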
Enforce operation-level access controls for AI agents. GLBA, NYDFS Part 500, and PCI DSS all require access controls restricting AI to the minimum necessary for each function. ABAC policy enforcement at the operation level — evaluated against the agent’s authenticated identity, the data’s classification, and the request context — satisfies these requirements simultaneously. Folder-level permissions are not sufficient.
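A minimal sketch of what an operation-level decision can look like, under the simplifying assumption that policy is a static lookup keyed by agent function. Production ABAC engines evaluate far richer attribute sets (time, purpose, device, data lineage), but the shape of the decision is the same: identity, classification, operation, and task context together determine each individual access.

```python
from dataclasses import dataclass

@dataclass
class Request:
    agent_id: str        # authenticated AI agent identity
    agent_function: str  # e.g. "portfolio-reporting"
    operation: str       # e.g. "read", "export"
    data_class: str      # classification of the target data, e.g. "npi"
    customer_id: str     # customer record the request targets

# Illustrative policy: each agent function gets only the data classes and
# operations its function requires -- nothing broader.
POLICY = {
    "portfolio-reporting": {"data_classes": {"npi"}, "operations": {"read"}},
    "marketing-summary":   {"data_classes": {"public"}, "operations": {"read"}},
}

def authorize(req: Request, task_customer: str) -> bool:
    rule = POLICY.get(req.agent_function)
    if rule is None:
        return False                              # default deny
    if req.data_class not in rule["data_classes"]:
        return False                              # classification not permitted
    if req.operation not in rule["operations"]:
        return False                              # operation not permitted
    return req.customer_id == task_customer       # task-scoped, not repository-wide

# The reporting agent may read NPI for the customer its current task concerns...
print(authorize(Request("agent-17", "portfolio-reporting", "read", "npi", "C-1001"), "C-1001"))  # True
# ...but not for any other customer, even though the repository holds them all.
print(authorize(Request("agent-17", "portfolio-reporting", "read", "npi", "C-2002"), "C-1001"))  # False
```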
Implement FIPS-validated encryption for AI-processed financial data. GLBA, NYDFS, and PCI DSS all require strong cryptography for financial data in transit and at rest. FIPS 140-3 Level 1 validated encryption satisfies examination requirements across all three frameworks. Verify that AI tools processing NPI or cardholder data provide this level — standard TLS is not sufficient.
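For illustration only, the sketch below encrypts a customer record with AES-256-GCM using the Python cryptography library; the data and context strings are made up. One caveat worth stating plainly: FIPS 140-3 validation attaches to the cryptographic module an application is built against, not to application code, so choosing an approved algorithm is necessary but not sufficient.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256-GCM is a FIPS-approved algorithm, but FIPS 140-3 validation is a
# property of the underlying cryptographic module (e.g., a FIPS-validated
# OpenSSL build) -- verify the module, not just the cipher name.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

record = b'{"customer": "C-1001", "balance": 25000}'  # illustrative NPI payload
nonce = os.urandom(12)                                # 96-bit nonce, never reused per key
aad = b"agent=agent-17;function=portfolio-reporting"  # binds ciphertext to the access context

ciphertext = aesgcm.encrypt(nonce, record, aad)
assert aesgcm.decrypt(nonce, ciphertext, aad) == record
```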
Produce operation-level audit trails feeding your SIEM. SR 11-7 model monitoring, NYDFS audit trail requirements, GLBA logging standards, and DORA ICT monitoring all require the same evidence: what the AI accessed, when, under what authorization, and what it produced. Operation-level audit logs, attributed to authenticated agents and fed continuously into your SIEM, satisfy all four frameworks with a single investment.
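As a minimal sketch of what an operation-level, tamper-evident audit event can look like: each record captures those four facts and chains to the previous record's hash, so a retroactive edit breaks the chain. Field names and the SIEM forwarding stub are assumptions for illustration, not a prescribed log format.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_event(log: list[dict], agent_id: str, operation: str,
                 resource: str, authorization: str, outcome: str) -> dict:
    """Append one operation-level audit event, hash-chained to its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,            # which AI accessed the data
        "operation": operation,          # what it did
        "resource": resource,            # what it touched
        "authorization": authorization,  # under what policy decision
        "outcome": outcome,              # what it produced
        "prev_hash": prev_hash,          # chain link: upstream edits break it
    }
    payload = json.dumps(event, sort_keys=True).encode()
    event["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(event)
    # Forward each event to the SIEM as it is written (e.g., JSON over
    # syslog or an HTTPS collector) rather than batching retrospectively.
    print(json.dumps(event))
    return event

audit_log: list[dict] = []
append_event(audit_log, "agent-17", "read", "customers/C-1001/portfolio",
             "abac:portfolio-reporting", "report-generated")
```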
Assess third-party AI under your vendor governance program. Every third-party AI platform your firm uses must be assessed for AI-specific governance — not just general cybersecurity posture. Verify FIPS encryption, operation-level audit logging, and the vendor’s own model risk practices. GRC programs that assess vendor security but not vendor AI governance are incomplete for financial services regulatory purposes.
Kiteworks Compliant AI: Built for the Financial Services Regulatory Environment
Financial services firms need AI governance that produces the specific evidence their regulators will examine — not general-purpose compliance tooling that approximates the standard. Kiteworks compliant AI delivers that evidence inside the Private Data Network, at the data layer, before any AI agent interaction with customer financial data occurs.
Every AI agent is authenticated with an identity linked to a human authorizer, satisfying SR 11-7’s accountability requirements and NYDFS Part 500’s access privilege controls.
ABAC policy enforces minimum necessary access at the operation level, satisfying GLBA Safeguards Rule, NYDFS, and PCI DSS access control requirements simultaneously.
FIPS 140-3 Level 1 validated encryption protects customer financial data in transit and at rest, satisfying encryption requirements across all of these frameworks.
A tamper-evident audit trail per interaction feeds your SIEM, satisfying SR 11-7 monitoring, NYDFS audit trail, GLBA logging, DORA ICT monitoring, and GDPR Article 30 records requirements in a single continuous record.
When your OCC examiner, NYDFS examiner, or PCI QSA asks how your firm governs AI access to customer financial data, the answer is an evidence package — not a policy document.
Contact us to see how Kiteworks supports AI compliance for financial services firms across your full regulatory stack.
Frequently Asked Questions
Does SR 11-7 apply to AI and machine learning models?
Yes. SR 11-7 defines a model broadly as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.” Machine learning and AI models that influence financial decisions — credit scoring, fraud detection, trading, customer risk assessment — fall squarely within this definition. The guidance’s requirements for development, validation, ongoing monitoring, and human override apply to AI models as rigorously as to traditional statistical models. Regulators have made clear in examination guidance and enforcement actions that SR 11-7’s requirements cannot be waived or weakened for AI models on the grounds that their complexity makes full validation impractical.
What does NYDFS Part 500 require for AI systems?
The 2023 amendments to NYDFS Part 500 require covered financial institutions to include AI systems within their cybersecurity programs as a matter of explicit regulatory expectation. Specific requirements include: maintaining a complete asset inventory that includes AI systems; implementing access privilege controls covering AI-accessible data; maintaining audit trails sufficient to detect and respond to cybersecurity events involving AI; and conducting periodic risk assessments that address AI-related risks. The regulation also requires annual certification of compliance by senior officers — making AI governance a board-level accountability issue rather than a purely technical one. NYDFS examiners are actively evaluating AI governance during examination cycles.
How does the GLBA Safeguards Rule apply to AI agents?
GLBA’s Safeguards Rule requires financial institutions to implement a comprehensive information security program that protects nonpublic personal information against unauthorized access and use. The 2023 Safeguards Rule amendments added specific requirements — encryption, access controls with defined authorization standards, multi-factor authentication, and audit log retention — that apply to all systems accessing NPI, including AI systems. An AI agent accessing customer account data, generating financial reports, or processing loan applications must satisfy these safeguard requirements. The minimum necessary standard that GLBA implies — access limited to what is required for the specific function — must be enforced at the operation level for AI agents, not just at the system or folder level.
When do AI systems fall within PCI DSS scope?
Any AI system that stores, processes, or transmits cardholder data — or that has access to the cardholder data environment — is within PCI DSS scope. Specific requirements include: a unique identifier for every AI agent accessing the CDE; access restricted to the minimum necessary for business need; all access to cardholder data logged with sufficient detail to reconstruct the activity; and strong cryptography for cardholder data in transit and at rest. AI tools embedded in payment processing, fraud detection, or customer service workflows that touch cardholder data must be assessed as part of the PCI scoping exercise — their presence in the CDE is not automatically excluded by the fact that they are AI systems rather than human users.
How do DORA and GDPR affect U.S. financial services firms using AI?
For financial services firms with EU operations or EU customers, DORA and GDPR layer on top of — and do not replace — U.S. regulatory requirements. DORA’s ICT risk management and third-party risk requirements apply to AI systems in EU-regulated entities, requiring risk classification, access controls, audit trails, and vendor assessment that parallels U.S. SR 11-7 and GLBA requirements. GDPR Article 22 imposes additional obligations for automated decision-making affecting EU data subjects — credit scoring, fraud detection, and risk assessment AI must satisfy lawful basis, transparency, and human review requirements. The practical governance implication: a single data-layer governance architecture that enforces authenticated access, ABAC policy, FIPS encryption, and tamper-evident audit trails satisfies the evidentiary standard across both U.S. and EU frameworks simultaneously, reducing the compliance overhead of operating internationally.
Additional Resources
- Blog Post: Zero-Trust Strategies for Affordable AI Privacy Protection
- Blog Post: How 77% of Organizations Are Failing at AI Data Security
- eBook: AI Governance Gap: Why 91% of Small Companies Are Playing Russian Roulette with Data Security in 2025
- Blog Post: There’s No “--dangerously-skip-permissions” for Your Data
- Blog Post: Regulators Are Done Asking Whether You Have an AI Policy. They Want Proof It Works.