Regulators Are Done Asking Whether You Have an AI Policy. They Want Proof It Works.
The International AI Safety Report 2026 is framed as a scientific assessment to support policymaking. It examines what general-purpose AI can do today, what emerging risks it poses, and what risk management approaches exist. It is not a regulation. It is not a directive. It is something potentially more consequential — the evidence base that regulators across multiple jurisdictions will use to decide what to require next.
Read through a business crime lens — as Hogan Lovells did in a recent analysis — the report points to three practical problems that every organization needs to confront. It is easier than ever to defraud a corporate. It is harder than ever to check whether something is real. And it is unclear who can be held accountable when the harm occurs.
That combination — lower barriers to fraud, degraded trust in communications, and uncertain accountability — creates a regulatory and compliance environment where organizations cannot afford to treat AI governance as a policy exercise. Regulators are moving beyond asking whether governance frameworks exist. They are beginning to ask whether those frameworks are enforced, whether controls are tested, and whether organizations can produce evidence that their AI systems are not processing data they should not be processing or making decisions they should not be making.
The report’s own assessment of current risk management is direct: current measures do not reliably prevent harm, and evidence of effectiveness in real-world conditions remains limited. For organizations deploying AI in high-risk areas, the message is clear. The governance gap between policy and proof is where regulatory exposure lives.
5 Key Takeaways
- AI Has Made Fraud Cheaper, Faster, and Harder to Attribute Than at Any Point in History. The International AI Safety Report 2026 concludes that general-purpose AI is making fraud, impersonation, and cyber compromise cheaper, faster, and harder to attribute. Current safeguards do not reliably prevent harm. Research cited in the report suggests listeners mistake AI-generated voices for real speakers 80% of the time. Cloned voices have already been used to persuade victims to transfer money by exploiting trust-based approval processes. The barrier to entry for business fraud has effectively collapsed.
- Synthetic Content Is Breaking Trust-Based Controls That Organizations Depend On. Voice, video, and email can no longer be trusted for high-risk actions. A credible synthetic “executive” request to move funds, change supplier bank details, override approval steps, reset credentials, or share sensitive information requires almost no technical skill to produce. The report notes that technical fixes like watermarks and labels help, but skilled actors can often remove them, and identifying where deepfakes originate is difficult. The controls most organizations rely on — trust in the voice on the phone, the face on the video call, the name in the email — are now exploitable at scale.
- Regulators Want Documented Governance Frameworks — Not Just AI Policies on Paper. The report dedicates over half its length to risk management practices and signals that regulators will increasingly expect organizations to manage AI risks through documented governance frameworks, risk assessments, and controls that explicitly address data integrity and misuse. Governance initiatives are proliferating — the EU AI Act, the G7 Hiroshima Process, developer safety frameworks — but the report’s own assessment is cautious: evidence on real-world effectiveness of most risk management measures remains limited. The gap between having a governance document and proving that governance is operational is where regulatory exposure lives.
- AI Creates Business Crime Risk From the Inside, Not Just the Outside. The business crime risk is not limited to external attackers. Employees, agents, and other associated persons can use AI to generate credible fakes, fabricate supporting documents, and muddy audit trails. In the UK, this intersects directly with the failure to prevent fraud offence, which has been in force since September 2025 for large organizations. AI-enabled methods — fake approval requests, fabricated documentation, high-volume synthetic communications — must now be addressed in fraud risk assessments, training programs, and prevention procedures.
- Defense in Depth Is Not Optional — It Is the Report’s Central Recommendation. The report stresses that no single safeguard is reliable against AI-enabled threats. It recommends defense in depth: multiple layers of independent safeguards so that if one fails, others still prevent harm. For organizations, this means dual approvals, call-backs via known numbers, friction for first-time or changed payees, out-of-band verification for high-risk actions, and AI incident response rehearsals that test contested authenticity, rapid shut-down decisions, and evidence preservation. Resilience, not prevention, is the operating model.
Synthetic Content Has Broken the Trust Model That Fraud Controls Depend On
The report highlights harmful incidents involving AI-generated content, particularly audio and video impersonation. It cites research suggesting listeners mistake AI-generated voices for real speakers 80% of the time. It describes cases where cloned voices were used to exploit trust-based approval processes and persuade victims to transfer funds.
For businesses, the implications are immediate and specific. A voice call from a CFO authorizing an urgent wire transfer. A video conference with a supplier confirming changed bank details. An email from a board member requesting sensitive documents. Each of these scenarios has been exploited using synthetic content. Each relied on the target’s trust in the apparent identity of the person making the request.
The report notes the limits of technical countermeasures. Watermarks and labels can help, but skilled actors can remove them. Identifying where deepfakes originate is difficult. Detection tools exist but are not reliable enough to serve as a primary defense. The practical implication: organizations cannot rely on detecting synthetic content after it arrives. They need controls that assume communications may not be authentic and require independent verification before high-risk actions are executed.
This means treating voice, video, and email as untrusted channels for payments, supplier changes, credential resets, and urgent approvals. It means requiring out-of-band verification through a separate, pre-established channel. It means building friction into processes that attackers rely on speed to exploit. The trust model that most organizations’ fraud controls are built on — that a recognized voice or face is sufficient authorization — is no longer viable.
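To make that control concrete, here is a minimal sketch of a verification gate, assuming a hypothetical directory of pre-registered contacts and a placeholder confirmation step. The action names, directory, and function signatures are illustrative, not drawn from the report.

```python
# Minimal sketch of an out-of-band verification gate for high-risk actions.
# DIRECTORY, HIGH_RISK_ACTIONS, and confirm_via_out_of_band_channel are
# hypothetical placeholders, not a reference implementation.

from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"wire_transfer", "payee_change", "credential_reset"}

# Pre-established contacts on file (e.g. from an HR or vendor-master record),
# never taken from the inbound message itself.
DIRECTORY = {"cfo@example.com": "+1-555-0100"}

@dataclass
class ActionRequest:
    action: str           # e.g. "wire_transfer"
    requester: str        # identity claimed in the inbound communication
    inbound_channel: str  # "voice", "video", "email": all treated as untrusted

def confirm_via_out_of_band_channel(contact: str, request: ActionRequest) -> bool:
    # Placeholder: a real system would trigger a call-back or step-up
    # authentication over the registered channel and wait for confirmation.
    print(f"Call back {contact} to confirm '{request.action}' "
          f"claimed by {request.requester}")
    return False  # default-deny until the call-back actually succeeds

def authorize(request: ActionRequest) -> bool:
    if request.action not in HIGH_RISK_ACTIONS:
        return True  # low-risk actions follow the normal workflow
    # The inbound channel is never sufficient authorization on its own.
    contact = DIRECTORY.get(request.requester)
    if contact is None:
        return False  # unknown requester: fail closed
    return confirm_via_out_of_band_channel(contact, request)
```

The property that matters is that authorization never derives from the channel the request arrived on: the contact used for verification comes from a record established before the request existed.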
Cyber Operations Are Being Commoditized — Even Without Full Autonomy
The report devotes substantial attention to AI use in cyberattacks, noting that developers increasingly report misuse of their systems in cyber operations and that illicit marketplaces sell easy-to-use tools that lower attacker skill requirements.
The report is careful about overclaiming. Fully autonomous end-to-end cyberattacks have not been confirmed. It is difficult to verify whether real-world incident levels have increased specifically because of AI. But the practical point is more important than the theoretical one: blended attacks are becoming easier, with synthetic content used to gain initial access or authorization, paired with AI-assisted exploitation and persistence. The attacker does not need full autonomy. They need enough automation to make each stage faster and cheaper.
The report offers a measured note of optimism: it remains an open question whether future capability improvements will benefit attackers or defenders more. But that advantage will only materialize where organizations deploy AI effectively in security and fraud detection. The organizations that invest in AI-powered defense will be better positioned than those that face AI-powered attacks with traditional tools. The organizations that wait will discover the asymmetry has widened against them.
For compliance teams, the cyber operations discussion reinforces a specific point: AI systems processing large volumes of organizational data create novel compliance challenges. When AI tools are used in security operations — monitoring network traffic, analyzing logs, detecting anomalies — they are processing data that may include personally identifiable information, protected health information, and regulated financial data. The governance framework for AI must account for these data flows, not just the AI’s decision-making outputs.
The Business Crime Risk Coming From Inside the Organization
The report’s most underappreciated finding for corporate audiences is this: AI creates business crime risk from the inside, not just the outside. Employees, agents, and other associated persons can use general-purpose AI to generate credible fake documents, fabricate approval requests, create synthetic communications, and obscure audit trails.
In the UK, this intersects directly with the failure to prevent fraud offence, which came into force on 1 September 2025 for large organizations. The offence focuses on whether an organization had reasonable prevention procedures when an associated person commits specified fraud offences intending to benefit the organization or its clients. AI-enabled methods — fake approval requests, fabricated supporting documents, high-volume synthetic communications — should now be explicitly addressed in any fraud risk assessment, training program, and prevention procedure.
The report’s discussion of automation bias adds a separate dimension. When teams defer to AI-assisted outputs even when they are wrong, organizations risk making decisions and statements that are harder to evidence and defend. This includes disclosures to regulators, contractual representations, and dealings with counterparties. The audit trail for an AI-assisted decision may not capture the reasoning, the data inputs, or the degree to which human judgment was actually applied versus simply ratified. When a regulator or counterparty later asks how a decision was reached, the absence of that documentation creates exposure.
Organizations need audit infrastructure that captures not just what AI systems did, but what data they accessed, what outputs they produced, and what actions humans took based on those outputs. Without this level of documentation, the distinction between a decision made by a human using AI assistance and a decision made by AI with human rubber-stamping becomes impossible to demonstrate.
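As a rough illustration of what such a record could capture, consider the sketch below. The field names and the append-only JSON Lines format are assumptions about a reasonable schema, not a prescribed standard.

```python
# Illustrative sketch of an append-only decision-chain record. Every field
# name here is an assumption about what a useful schema could capture.

import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    user_id: str         # who acted on the output
    model: str           # which AI system produced it
    data_accessed: list  # datasets or classifications the AI read
    ai_output: str       # what the AI recommended
    human_review: str    # the reasoning actually applied, not just "approved"
    action_taken: str    # the final decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_record(record: AIDecisionRecord, path: str = "ai_audit.jsonl") -> None:
    # Append-only JSON Lines keeps the trail reviewable; pairing it with
    # write-once storage or periodic hashing (not shown) makes it tamper-evident.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

When a regulator later asks how a decision was reached, the human_review field is what separates applied judgment from rubber-stamping.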
AI Governance Requires Data Governance — and Audit Trails to Prove It
The report’s risk management analysis leads to a conclusion that organizations deploying AI cannot avoid: AI governance frameworks are only as credible as the data governance infrastructure behind them.
When a regulator asks how an organization knows its AI is not processing data it should not be, the answer cannot be a policy document. It must be an audit trail showing what data the AI accessed, when, under what authorization, and what actions were taken. When a regulator asks whether AI systems comply with data protection obligations, the answer must include evidence of policy enforcement — not just evidence that policies exist.
This requirement intersects with multiple regulatory frameworks simultaneously. Under GDPR, AI systems that process personal data must comply with Article 22 provisions on automated decision-making, Article 28 processor obligations, Article 30 Records of Processing Activities, and Articles 44 through 50 on cross-border transfers. Under HIPAA, AI systems accessing protected health information must operate within the Security Rule’s requirements for information system activity review. Under anti-money laundering and sanctions regimes, AI systems used for financial crime detection must demonstrate they operate within approved parameters.
The operational infrastructure to meet these requirements includes comprehensive audit trails that log every AI interaction with enterprise data — timestamps, user IDs, data classifications, actions taken. It includes data classification systems that enforce access policies automatically, so that AI agents cannot access data categories they are not authorized to process. It includes anomaly detection that flags when AI systems exhibit unusual data access patterns, and automated alerting with escalation paths that ensure incidents reach the right people.
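As a simple illustration of the classification-enforcement piece, a deny-by-default check might look like the sketch below; the labels, agent names, and clearance levels are hypothetical.

```python
# Minimal sketch of classification-based access enforcement for AI agents.
# The label hierarchy and agent clearances are illustrative assumptions.

CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

AGENT_CLEARANCE = {
    "support-summarizer": "internal",      # may read public and internal data
    "fincrime-screening": "confidential",  # may read up to confidential data
}

def can_access(agent: str, data_label: str) -> bool:
    """Deny by default: an agent may only read data at or below its clearance."""
    clearance = AGENT_CLEARANCE.get(agent)
    if clearance is None or data_label not in CLASSIFICATION_ORDER:
        return False
    return (CLASSIFICATION_ORDER.index(data_label)
            <= CLASSIFICATION_ORDER.index(clearance))

# Example: the summarizer may read internal data but is refused restricted data.
assert can_access("support-summarizer", "internal")
assert not can_access("support-summarizer", "restricted")
```

Denied attempts are themselves useful telemetry: logged, they become input to the anomaly detection and alerting layers described above.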
Without this infrastructure, AI governance frameworks remain aspirational. With it, organizations can demonstrate to regulators that controls are enforced, that data flows are documented, and that governance is operational — not just structural. The Kiteworks AI Data Gateway and Private Data Network provide exactly this layer: governing what data AI systems can access, logging every interaction, and generating the regulator-ready evidence that policy documents alone cannot.
Defense in Depth: The Report’s Operating Model for AI Risk
The report’s central recommendation is defense in depth — multiple layers of safeguards so that if one fails, others still prevent harm. This is not a new concept in security architecture, but the report applies it specifically to AI risk in a way that has direct implications for how organizations structure their controls.
For fraud prevention, defense in depth means independent verification at multiple stages. Dual approvals for high-value transactions. Call-backs through pre-established numbers, not numbers provided in the suspicious communication. Friction for first-time or changed payees that forces additional verification steps. Time delays that prevent attackers from exploiting urgency. Each layer operates independently, so compromising one does not compromise the chain.
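A minimal sketch of how those layers can compose follows; the payment fields, the 50,000 threshold, and the 24-hour delay are illustrative assumptions, not recommendations from the report.

```python
# Hedged sketch of independent, layered payment safeguards: defeating one
# layer (say, a convincing synthetic voice) does not defeat the chain.
# All field names, thresholds, and rules here are illustrative.

from datetime import datetime, timedelta, timezone

def dual_approval(payment: dict) -> bool:
    # Two distinct approvers, neither of whom is the requester.
    approvers = set(payment["approvers"]) - {payment["requester"]}
    return len(approvers) >= 2

def payee_friction(payment: dict) -> bool:
    # First-time or changed payees require completed out-of-band verification.
    return (not payment["payee_is_new_or_changed"]) or payment["payee_verified"]

def cooling_off(payment: dict, delay: timedelta = timedelta(hours=24)) -> bool:
    # High-value payments wait out the delay that urgency-driven fraud
    # depends on beating.
    if payment["amount"] < 50_000:
        return True
    return datetime.now(timezone.utc) - payment["requested_at"] >= delay

LAYERS = [dual_approval, payee_friction, cooling_off]

def release_payment(payment: dict) -> bool:
    # Every layer must pass independently; there is no override path.
    return all(layer(payment) for layer in LAYERS)
```

Independence is the point: a forged approval does not shorten the cooling-off period, and a verified payee does not waive dual approval.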
For data governance, defense in depth means controls at the data layer, the application layer, and the network layer. Data classification that restricts what AI systems can access. Access controls that enforce least-privilege principles with continuous verification. Audit trails that document every interaction. Anomaly detection that identifies deviations from established patterns. And incident response procedures that have been rehearsed, not just documented.
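The anomaly detection layer can start simply. The toy example below flags an AI system whose daily data-access volume deviates sharply from its own recent baseline; the z-score threshold and the trailing window are assumptions, and a production system would use richer signals (data categories touched, time of day, destination of outputs).

```python
# Toy illustration of access-pattern anomaly detection: flag an AI system
# whose daily record-access count deviates sharply from its own baseline.
# The 3-sigma threshold is an assumption, not a recommended setting.

from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """history holds daily access counts for the trailing window (>= 2 days)."""
    if len(history) < 2:
        return False  # not enough baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # flat baseline: any change is notable
    return abs(today - mu) / sigma > z_threshold

# Example: a steady ~1,000 records per day, then a sudden 25,000-record day.
assert is_anomalous([980, 1010, 995, 1005, 990], 25_000)
assert not is_anomalous([980, 1010, 995, 1005, 990], 1_000)
```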
The report also emphasizes building societal resilience — the ability of systems to resist, absorb, recover from, and adapt to shocks and harms. For organizations, this translates to incident response readiness. Who leads the response when synthetic content is used to execute fraud? Who has authority to pause or shut down affected systems? How will the organization communicate externally about the incident? How will evidence be preserved for regulatory and legal proceedings? These questions cannot be answered during the incident. They must be answered before it.
What Organizations Should Do Now
The International AI Safety Report 2026 is aimed at policymakers, but its corporate implications are immediate. Here is what the report’s findings demand of organizations deploying or exposed to AI.
Treat voice, video, and email as untrusted for high-risk actions. Payments, supplier changes, credential resets, and urgent approvals must require out-of-band verification through a separate, pre-established channel. The trust model that relies on recognizing a voice or face is broken. Build verification procedures that assume communications may be synthetic.
Build defense in depth with independent, layered safeguards. Dual approvals, call-backs via known numbers, friction for first-time or changed payees, time delays for high-value transactions. Each layer should operate independently so that compromising one does not compromise the others. Extend this approach to data governance: classification-based access controls, continuous verification, comprehensive audit trails, and anomaly detection.
Update fraud risk assessments and prevention procedures to cover AI-enabled methods. Synthetic media, document fabrication, insider misuse of AI tools, and automation bias must be explicitly addressed. For UK organizations, this is a legal requirement under the failure to prevent fraud offence. Extend the assessment to key third parties and vendors.
Build AI governance frameworks backed by operational audit infrastructure. Policy documents alone do not satisfy emerging regulatory expectations. Organizations need comprehensive audit trails that log what data AI systems access, when, under what authorization, and what actions are taken. Data classification must enforce access policies automatically. Anomaly detection must flag unauthorized patterns. Compliance reporting must be exportable and regulator-ready.
Run AI incident response scenarios before you need them. Rehearse contested authenticity — what happens when you cannot determine whether a communication is real. Rehearse rapid shut-down decisions — who has authority to pause AI systems and under what conditions. Rehearse evidence preservation — how you will document the incident for regulatory and legal proceedings. The time to answer these questions is before the incident occurs.
Pressure-test accountability in AI vendor contracts. The report highlights uncertainty over liability allocation because harms can be difficult to trace to specific design choices and responsibilities are distributed across multiple actors. Assume remediation from AI vendors may be slow or impossible. Contracts should specify data processing obligations, breach notification requirements, audit rights, and termination triggers. Governance frameworks should include third-party AI oversight with the same rigor applied to internal systems.
The Barrier to Fraud Has Collapsed. The Barrier to Compliance Has Not.
The International AI Safety Report 2026 delivers a message that is uncomfortable in its clarity: general-purpose AI means the barrier to entry for fraud and deception is lower than ever before. Synthetic content is breaking the trust-based controls that organizations have relied on for decades. Cyber operations are being commoditized. Insider misuse is becoming harder to detect. And current safeguards do not reliably prevent harm.
At the same time, regulatory expectations are rising. Governance frameworks must be documented. Risk assessments must address AI-specific threats. Controls must be tested and evidenced. The intersection of AI risk with data protection, financial crime, and corporate liability regimes means that AI governance is not a standalone exercise. It is a compliance obligation that touches every framework the organization already operates under.
The organizations that treat this report as a signal — building operational governance infrastructure, deploying defense in depth, creating audit trails that prove controls work, and rehearsing incident response before it is needed — will be positioned to demonstrate due diligence when regulators ask the questions the report is designed to prompt.
The organizations that treat it as another policy-writing exercise will discover that the gap between their governance documents and their operational reality is exactly where enforcement actions find their footing.
To learn how Kiteworks can help, schedule a custom demo today.
Frequently Asked Questions
How is AI voice cloning used to commit corporate fraud, and what defenses work against it?
AI voice cloning requires only a short audio sample — often scraped from public sources like earnings calls, conference recordings, or LinkedIn videos — to generate a synthetic voice convincing enough that research cited in the International AI Safety Report 2026 found listeners mistake AI-generated voices for real speakers 80% of the time. Attackers use this to impersonate executives, suppliers, or board members in calls or video conferences requesting urgent wire transfers, supplier bank detail changes, or credential resets. The attack exploits both the trust model (the target believes they recognize the voice) and the urgency model (speed prevents verification). Effective defenses don’t attempt to detect the synthetic content — they assume any unverified communication could be synthetic. For high-risk actions, organizations need out-of-band verification through a pre-established channel, dual approvals that require independent authorization, and friction controls — time delays, call-backs to known numbers, step-up authentication — that break the urgency the attack depends on.
How does the UK failure to prevent fraud offence apply to AI-enabled fraud?
The failure to prevent fraud offence, in force since 1 September 2025 for large organizations, creates liability when an associated person — employee, agent, contractor, or subsidiary — commits a specified fraud offence intending to benefit the organization or its clients, and the organization did not have reasonable prevention procedures in place. In the AI era, “reasonable prevention procedures” must explicitly address AI-enabled methods: synthetic media used to fabricate executive communications, AI-generated fake documentation supporting fraudulent transactions, high-volume automated communications designed to overwhelm approval processes, and insider use of AI tools to obscure audit trails. A fraud risk assessment that does not address these vectors — and training programs that do not cover them — will not satisfy the “reasonable” standard. Organizations must also extend the assessment to third parties who act on their behalf, since associated person liability extends beyond direct employees.
What is automation bias, and why does it create regulatory exposure?
Automation bias is the documented tendency to defer to AI-generated outputs without sufficient critical scrutiny — accepting AI recommendations as correct rather than using them as one input among several. In regulated environments, this creates a specific accountability gap: when a decision is challenged by a regulator or counterparty, the organization must demonstrate that the decision reflected genuine human judgment, not AI rubber-stamping. An audit trail that records only the AI’s output and the final decision — without capturing the data inputs the AI accessed, the human review steps taken, and the reasoning applied — cannot make that demonstration. Under GDPR Article 22, decisions with legal or significant effects that involve automated processing must meet specific requirements including human oversight. Under HIPAA and financial crime frameworks, information system activity review requires evidence of operational controls, not just procedural commitments. Organizations need audit infrastructure that captures the full decision chain — what data the AI accessed, what output it produced, what human review occurred, and what action was taken — to close the accountability gap automation bias creates.
What GDPR obligations apply to AI systems that process personal data?
AI systems that process personal data are subject to at least four GDPR obligations that create direct audit trail requirements. Article 22 requires that decisions based solely on automated processing with significant effects on individuals include human oversight, with documentation that this oversight genuinely occurred. Article 28 requires processor agreements with AI vendors specifying data processing purposes, security requirements, and sub-processor controls — the audit trail must confirm these boundaries are respected in practice. Article 30 requires Records of Processing Activities documenting what personal data AI systems access, for what purpose, and how long it is retained. Articles 44 through 50 govern cross-border transfers, requiring evidence that personal data processed by AI systems doesn’t flow to jurisdictions without adequate protections. A Data Protection Impact Assessment is required for AI processing that is likely to result in high risk. Together, these obligations require audit infrastructure that logs what personal data each AI system accesses, when, under what authorization, and where it flows — not just what decisions the AI made.
What should AI vendor contracts cover to address the report’s accountability gap?
The International AI Safety Report 2026 highlights a structural accountability problem: AI harms are often difficult to trace to specific design choices, and responsibility is distributed across developers, deployers, and users. This creates contractual exposure for organizations that assume vendor remediation will be available, timely, and sufficient. AI vendor contracts should specify four things the report’s findings make essential. First, data processing obligations: exactly what data the AI can access, for what purpose, under what security controls, and whether it can be used to train downstream models. Second, breach notification: specific timelines and escalation paths when the AI system processes data outside authorized boundaries or produces outputs that create legal exposure. Third, audit rights: the organization’s right to review logs of what data the AI accessed and what actions it took — without which compliance with GDPR, HIPAA, and financial crime frameworks cannot be demonstrated. Fourth, termination triggers: defined conditions — regulatory non-compliance, security incidents, model behavior deviations — under which the organization can exit the contract without penalty. Third-party risk management frameworks that apply to traditional vendors must extend with the same rigor to AI systems.