Canada ITSG-33 and AI: Meeting CSE’s Security Control Framework in Agentic Environments

Canadian federal agencies, their contractors, and private sector organizations that handle government-classified information are deploying AI agents across document processing, citizen service workflows, regulatory review, and program administration.

Many of these workflows involve Protected A and Protected B information — Canada’s designations for sensitive government data whose unauthorized disclosure could reasonably be expected to cause injury (Protected A) or serious injury (Protected B) to individuals, organizations, or government interests.

That places AI agent deployments squarely within the scope of ITSG-33, the Information Technology Security Guidance framework published by the Canadian Centre for Cyber Security.

ITSG-33 is Canada’s IT security risk management framework, closely aligned with NIST 800-53 and serving as the foundation for cloud security assessments, Contract Security Program requirements, and FedRAMP-equivalent cloud authorization in the Canadian federal context. 

Like its U.S. counterparts, ITSG-33 does not provide an exemption for AI agents or automated systems accessing protected information. The access controls, audit requirements, encryption obligations, and incident response requirements that protect human employee access to Protected B data apply equally to AI agent access.

This post explains what ITSG-33 requires for AI-enabled workflows handling protected government information, identifies the compliance gaps that agentic deployments create in the Canadian context, outlines best practices for governing AI agent access to Protected A and B data, and makes the case for data-layer governance as the architecture that satisfies ITSG-33’s security control requirements for agentic systems.

Executive Summary

Main Idea: ITSG-33’s security controls — access control, audit and accountability, identification and authentication, system and communications protection, and incident response — apply to AI agents accessing Protected A and Protected B information. Canadian government agencies and contractors that have deployed AI against protected information workflows without updating their security control implementation to cover AI agents are operating with compliance gaps that the Treasury Board Secretariat and the Canadian Centre for Cyber Security will increasingly examine.

Why You Should Care: Organizations handling Protected B data face disqualification from federal procurement opportunities, mandatory breach notification under PIPEDA, and penalties up to C$10 million under Quebec’s Law 25 for non-compliant data handling. The Contract Security Program screens organizations and their security posture before granting access to Protected B data — and AI deployments that lack the data governance controls ITSG-33 requires are a compliance liability that extends to every government contract the organization holds.

Key Takeaways

  1. ITSG-33’s security controls apply to AI agents accessing Protected A and B information without exception. The framework governs access to classified government information regardless of whether the accessor is human or automated. An AI agent that reads, processes, or transmits Protected B data is subject to the same access control, audit, and encryption requirements as a human employee performing the same function.
  2. Protected B data must remain within Canada’s jurisdiction — including when processed by AI systems. Cloud services used to process Protected B data must be assessed by the Canadian Centre for Cyber Security against the Protected B/Medium Integrity/Medium Availability (PBMM) profile. AI inference pipelines that route Protected B data through non-assessed cloud infrastructure, or through infrastructure without Canadian data residency guarantees, create a data sovereignty gap that ITSG-33 does not accommodate.
  3. Audit requirements under ITSG-33 demand operation-level logs for Protected B access events. The framework’s AU (Audit and Accountability) control family requires that access to protected information be recorded at the operation level — what was accessed, by whom, under what authorization, and when. AI agents operating through shared service accounts produce logs that satisfy none of these requirements. The audit trail must link every Protected B access event to a specific authorized individual whose identity can be verified.
  4. ITSG-33’s access control requirements extend to AI agent workflows at the operation level. The AC control family requires that access to protected information be limited to authorized users and processes, with role-based access controls enforcing least-privilege principles. For AI agents, this means access must be scoped per-operation to only the Protected B data the specific task requires — not granted at the session level through broad service account credentials.
  5. The Contract Security Program assessment of your AI deployment is a procurement eligibility risk. Organizations seeking federal contracts that involve Protected B data must demonstrate that their information systems — including AI systems — satisfy the security controls required for that classification level. An AI deployment that cannot demonstrate ITSG-33-compliant access controls, audit trails, and data residency for Protected B data is a risk to contract eligibility, not just a technical compliance gap.

What ITSG-33 Requires for AI-Enabled Workflows

ITSG-33 provides a catalog of security controls organized into technical, operational, and management classes, aligned with NIST SP 800-53. For Protected B/PBMM compliance, four control families are most directly implicated by AI agent deployments: Access Control (AC), Audit and Accountability (AU), Identification and Authentication (IA), and System and Communications Protection (SC). Each maps directly to capabilities that most AI deployments currently do not provide.

Access Control (AC Family)

ITSG-33’s AC controls require that access to Protected B information be limited to authorized users and processes, with least-privilege principles enforced. For AI agents, this means two things. First, the agent must have an authenticated identity that can be verified before any protected information access occurs. Second, access must be scoped to the minimum necessary for the specific task — an agent authorized to read a protected document folder is not automatically authorized to download all files, transmit data externally, or access adjacent protected information categories. Attribute-based access control evaluated at the operation level is the mechanism that satisfies this requirement for agentic systems.
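To make the operation-level distinction concrete, the check can be sketched as follows. This is a minimal illustration, not any product’s or the ITSG-33 catalogue’s actual mechanism; the attribute names, the policy rules, and the agent identifier are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    agent_id: str        # unique per-workflow agent credential
    authorizer: str      # human who delegated the task
    classification: str  # e.g. "Protected A", "Protected B"
    operation: str       # e.g. "read", "download", "transmit_external"
    workflow: str        # workflow the credential was provisioned for

# Hypothetical least-privilege policy: each agent credential is scoped to one
# workflow, a maximum classification level, and an explicit set of operations.
POLICY = {
    "agent-docreview-017": {
        "workflow": "document-review",
        "max_classification": "Protected B",
        "allowed_operations": {"read"},  # read-only: no download, no external transmit
    },
}

LEVELS = ["Unclassified", "Protected A", "Protected B"]

def evaluate(req: AccessRequest) -> bool:
    """Allow only if every attribute of the request satisfies the policy."""
    rule = POLICY.get(req.agent_id)
    if rule is None:
        return False  # unknown identity: deny by default
    if req.workflow != rule["workflow"]:
        return False  # credential is scoped to a different workflow
    if LEVELS.index(req.classification) > LEVELS.index(rule["max_classification"]):
        return False  # requested data exceeds authorized classification
    return req.operation in rule["allowed_operations"]  # per-operation check

# A read within scope is allowed; a download of the same record is not.
read_req = AccessRequest("agent-docreview-017", "j.tremblay", "Protected B", "read", "document-review")
dl_req   = AccessRequest("agent-docreview-017", "j.tremblay", "Protected B", "download", "document-review")
```

The point of the sketch is the last two lines: the same agent, the same record, and the same classification level yield different decisions because the operation itself is an input to the policy — which is what distinguishes operation-level ABAC from session-level credentialing.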

Audit and Accountability (AU Family)

The AU control family requires comprehensive audit logging of all access to Protected B data, with logs protected from tampering. The audit record must capture who accessed protected information, what was accessed, what operation was performed, and when — in a format that supports compliance audits and forensic investigation. AI agents operating through shared service accounts produce infrastructure-level logs that record API calls without the operation-level detail, individual attribution, or tamper-evident protection that the AU family requires. Standard AI inference logs are not ITSG-33-compliant audit records for Protected B access events.

Identification and Authentication (IA Family)

ITSG-33’s IA controls require that users and processes be uniquely identified and authenticated before accessing protected information. Multi-factor authentication is required for access to Protected B systems. For AI agents, unique identification means each agent must have a distinct identity credential — not a shared service account — tied to the specific workflow and the human authorizer who delegated it. When multiple agents share credentials, or when the authentication record cannot trace the access event to a specific human decision-maker, the IA controls cannot be demonstrated as satisfied.

System and Communications Protection (SC Family)

The SC controls require that Protected B data be encrypted using AES-256 in transit and at rest. For AI agent deployments, this means every component of the inference pipeline — API calls, model inference environments, vector databases, temporary agent storage, and output delivery channels — must encrypt Protected B data with validated cryptographic implementations. The confidentiality, integrity, and availability of Protected B data must be maintained across every data path the agent touches, not just at the primary application layer.
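One way to operationalize “every data path the agent touches” is a coverage audit over an inventory of pipeline segments. The sketch below assumes a made-up inventory format — segment names and declared protections are illustrative, not drawn from any real deployment — and flags any segment whose declared protection falls short of the target.

```python
# Hypothetical inventory of the data paths an AI agent touches.
# "at_rest" of None marks a transit-only segment that stores no data.
PIPELINE = [
    {"segment": "api_gateway",      "in_transit": "TLS1.3", "at_rest": None},
    {"segment": "model_inference",  "in_transit": "TLS1.2", "at_rest": "AES-256"},
    {"segment": "vector_database",  "in_transit": "TLS1.3", "at_rest": "AES-256"},
    {"segment": "agent_temp_store", "in_transit": "TLS1.2", "at_rest": "AES-128"},  # gap
]

APPROVED_TRANSIT = {"TLS1.2", "TLS1.3"}
APPROVED_AT_REST = {"AES-256"}

def sc_gaps(pipeline):
    """Return (segment, direction) pairs whose declared protection misses the target."""
    gaps = []
    for seg in pipeline:
        if seg["in_transit"] not in APPROVED_TRANSIT:
            gaps.append((seg["segment"], "in_transit"))
        if seg["at_rest"] is not None and seg["at_rest"] not in APPROVED_AT_REST:
            gaps.append((seg["segment"], "at_rest"))
    return gaps
```

Run against the sample inventory, the audit surfaces the temporary agent store as the one segment below the AES-256 target — exactly the kind of secondary storage point that a primary-application-layer review misses.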

Data Residency for Protected B: The Cloud Assessment Requirement

One of the most operationally significant ITSG-33 requirements for AI deployments is the cloud service assessment obligation. Cloud services used to process Protected B workloads must be assessed by the Canadian Centre for Cyber Security against the PBMM profile.

This assessment verifies that the cloud infrastructure meets the security control requirements for Protected B data — including data residency within Canada. AI agents that process Protected B data through non-PBMM-assessed cloud services, or through services without documented Canadian data residency, are operating outside the permissible infrastructure for that classification level.

General-purpose commercial cloud regions — including Canadian regions of major hyperscalers that have not been specifically assessed for PBMM — do not automatically satisfy this requirement.


Where AI Deployments Create ITSG-33 Compliance Gaps

The compliance gaps AI agent deployments introduce into ITSG-33-governed environments are structurally similar to those seen in other regulatory frameworks, with one additional dimension: Canada’s data residency and PBMM cloud assessment requirement creates infrastructure-level exposure that cannot be addressed through application-layer configuration alone.

Non-Assessed Cloud Infrastructure for Protected B AI Workloads

The most common ITSG-33 gap in Canadian AI deployments is the use of cloud infrastructure not assessed against the PBMM profile. Major AI platforms — commercial LLM providers, AI orchestration services, and vector database vendors — typically operate on multi-region cloud infrastructure without CSE PBMM assessment for the Canadian government context. Organizations deploying these platforms against Protected B workflows are processing government-classified information on infrastructure outside the permissible boundary, regardless of application-layer access control configuration.

Shared Service Accounts and Missing Individual Attribution

ITSG-33 cannot be satisfied by AI agent deployments using shared service account credentials. When multiple AI agents share a service account, the audit log cannot attribute a specific Protected B access event to a specific authorized individual. The Contract Security Program requires that organizations demonstrate who accessed protected information — an audit trail naming a service account rather than a person does not satisfy this requirement for Protected B data.

Inadequate Encryption Coverage Across AI Inference Pipelines

ITSG-33’s SC controls require AES-256 encryption for Protected B data in transit and at rest. AI inference pipelines include multiple transit and storage points: API calls to the model, model inference environments, vector databases, and output delivery pathways. Organizations that have confirmed encryption at the primary application layer may not have verified coverage at every pipeline point. Each unencrypted segment is an SC control gap for any Protected B data that passes through it.

Best Practices for ITSG-33-Compliant AI Agent Access to Protected Information

1. Use PBMM-Assessed Cloud Infrastructure for Protected B AI Workloads

Any AI inference pipeline processing Protected B data must run on cloud infrastructure assessed by the Canadian Centre for Cyber Security against the PBMM profile, with documented Canadian data residency. Organizations should request specific PBMM assessment documentation from AI vendors and cloud providers before deploying any Protected B workflow — general FedRAMP Moderate authorization does not substitute for CSE PBMM assessment.

2. Assign Unique Identity Credentials to Every AI Agent Workflow

Every AI agent accessing Protected B information must operate under a unique identity credential provisioned at the workflow level and linked to the specific human authorizer who delegated the task. Shared service accounts and pooled API keys do not satisfy ITSG-33’s IA control requirements. The authentication event and delegation chain must be captured in every audit record, providing the individual attribution the Contract Security Program and ITSG-33 AU controls require.
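As a sketch of what per-workflow provisioning means in practice, the credential itself can carry the delegation chain, so every downstream audit record inherits individual attribution. The field names and the provisioning function are illustrative assumptions, not a real identity system’s API.

```python
import uuid
from datetime import datetime, timezone

def provision_agent_credential(workflow: str, authorizer: str) -> dict:
    """Mint a unique, per-workflow agent identity bound to its human authorizer.

    Because the credential records who delegated the task and for which
    workflow, any audit entry derived from it can attribute a Protected B
    access event to a specific person — the attribution shared service
    accounts cannot provide.
    """
    return {
        "agent_id": f"agent-{uuid.uuid4()}",  # unique: never a shared service account
        "workflow": workflow,                 # scope: valid for one workflow only
        "authorizer": authorizer,             # the human who delegated the task
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }

# Two delegations of the same workflow by the same person still yield
# distinct agent identities, so their access events remain distinguishable.
cred_a = provision_agent_credential("document-review", "j.tremblay")
cred_b = provision_agent_credential("document-review", "j.tremblay")
```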

3. Enforce Operation-Level Access Controls Using ABAC

Implement ABAC that evaluates each AI agent Protected B data request against the agent’s authenticated profile, the classification level of the requested data, the workflow context, and the specific operation. Least-privilege enforcement at the operation level means an agent authorized to read a Protected B document cannot automatically download it, forward it externally, or access records beyond the specific task scope.

4. Implement Tamper-Evident Audit Logging for All Protected B Agent Access Events

Deploy operation-level audit logging for every AI agent Protected B interaction: authenticated agent identity, human authorizer, specific data accessed, operation performed, policy evaluation outcome, and timestamp. Logs must be tamper-evident, retained per Treasury Board records management policy, and feed into the organization’s SIEM for real-time anomaly detection.
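A minimal sketch of the tamper-evidence idea, using hash chaining over JSON events. The event shape (field names like `agent_id` and `decision`) is our own invention for illustration; a production log would also need protected storage and retention controls beyond what this shows.

```python
import hashlib
import json

def append_event(log: list, event: dict) -> None:
    """Append an audit event whose hash chains over the previous entry,
    so any later modification of an earlier entry is detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = dict(event, prev_hash=prev_hash)
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(dict(body, hash=digest))

def verify_chain(log: list) -> bool:
    """Recompute every hash; a tampered or reordered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

# Operation-level events: identity, authorizer, resource, operation, decision, time.
log = []
append_event(log, {"agent_id": "agent-docreview-017", "authorizer": "j.tremblay",
                   "resource": "PB-record-4411", "operation": "read",
                   "decision": "allow", "ts": "2025-01-15T14:03:22Z"})
append_event(log, {"agent_id": "agent-docreview-017", "authorizer": "j.tremblay",
                   "resource": "PB-record-4411", "operation": "download",
                   "decision": "deny", "ts": "2025-01-15T14:03:25Z"})
```

Note that denied operations are logged alongside allowed ones: the deny record is what lets an anomaly-detection pipeline in the SIEM spot an agent repeatedly probing beyond its authorized scope.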

5. Update Incident Response Plans to Address AI-Related Protected B Incidents

ITSG-33 requires incident response plans capable of addressing all relevant cybersecurity event types. AI deployments introduce new Protected B incident categories: unauthorized agent access, prompt injection causing data exfiltration, model compromise, and vendor-side incidents affecting Canadian data residency. Each requires defined detection criteria, containment procedures, and notification obligations under PIPEDA and Treasury Board breach notification requirements.

How Kiteworks Supports ITSG-33 Compliance for AI Agent Deployments

Governing AI agent access to Protected B information under ITSG-33 requires a platform that enforces the security controls the framework demands — at the data layer, not the model layer, and within Canadian data residency boundaries. The Kiteworks Private Data Network provides Canadian government agencies and their contractors with a governance architecture that intercepts every AI agent interaction with protected government information before access occurs, enforcing authenticated identity, ABAC policy, FIPS 140-3 Level 1 validated encryption, and tamper-evident audit logging for every operation.

Unique Agent Identity and Delegation Chain for ITSG-33 IA and AU Controls

Kiteworks authenticates every AI agent before any Protected B access occurs, using a unique per-workflow credential linked to the human authorizer who delegated the task. The complete delegation chain — authorizer identity, agent identity, Protected B data accessed, operation performed — is preserved in every audit log entry. This satisfies ITSG-33’s IA control requirements for individual attribution and provides the AU-family audit record that the Contract Security Program assessment requires: a tamper-evident log linking every protected information access event to a specific authorized individual.

Operation-Level ABAC for ITSG-33 AC Controls and Least-Privilege Enforcement

Kiteworks’ data policy engine evaluates every agent Protected B data request against a multi-dimensional policy: the agent’s authenticated profile, the classification level of the requested data, the workflow context, and the specific operation. An agent authorized to read a Protected B record cannot download it, forward it externally, or access Protected B data beyond its authorized scope. This per-operation enforcement satisfies ITSG-33’s least-privilege AC controls for AI agent access — replacing the inadequate session-level service account credentialing that most current deployments rely on.

FIPS 140-3 Encryption and SIEM-Integrated Audit Trail

All Protected B data accessed through Kiteworks is protected by FIPS 140-3 validated encryption in transit and at rest, satisfying ITSG-33’s SC control requirements across every point in the agent data path. Every Protected B interaction is captured in a tamper-evident, operation-level audit log that feeds directly into the organization’s SIEM. When an ITSG-33 compliance assessment or a breach investigation requires evidence of access controls for Protected B AI workflows, the evidence package is a report — not a forensic reconstruction spanning multiple infrastructure logs.

Flexible Deployment Options Supporting Canadian Data Residency

Kiteworks offers on-premises, private cloud, and hybrid deployment configurations that keep Protected B data within Canadian borders — satisfying ITSG-33’s data residency requirements for Protected B cloud workloads. Organizations can deploy Kiteworks within Canadian government-approved infrastructure, ensuring that AI agent access to Protected B data remains within the CSE-assessed perimeter required for that classification level. Secure deployment options extend the same governance architecture to hybrid environments where protected information moves between on-premises repositories and cloud-hosted AI workflows.

For Canadian government agencies and contractors seeking to deploy AI agents against Protected B workflows without compromising their ITSG-33 compliance posture, Kiteworks provides the governance infrastructure that makes every AI agent interaction with protected government information defensible by design. Learn more about Kiteworks for government or request a demo.

Frequently Asked Questions

Does ITSG-33 apply to AI agents accessing Protected B information?

ITSG-33 applies to AI agents accessing Protected B information. The framework’s AC, AU, IA, and SC controls govern access to protected government information regardless of whether the accessor is a human employee or an automated process. An AI agent that reads, processes, or transmits Protected B data is subject to the same access control, audit logging, authentication, and encryption requirements as a human employee performing the same function. ITSG-33 compliance requires that organizations extend their security control implementation to cover AI agent workflows that touch Protected B data.

Does a vendor’s SOC 2, ISO 27001, or FedRAMP Moderate authorization substitute for a CSE PBMM assessment?

No. ITSG-33 requires that cloud services processing Protected B data be assessed by the Canadian Centre for Cyber Security against the PBMM profile. A vendor’s SOC 2 certification, ISO 27001 certification, or even FedRAMP Moderate authorization does not substitute for a CSE PBMM assessment in the Canadian federal government context. Organizations must request specific PBMM assessment documentation from cloud providers and AI vendors before deploying any Protected B workflow through their infrastructure. Data sovereignty for Protected B requires Canadian data residency confirmed through CSE assessment, not vendor attestations based on other frameworks.

What audit records does ITSG-33 require for AI agent access to Protected B data?

ITSG-33’s AU control family requires that audit records for Protected B access events capture the authenticated identity of the accessor (or process), the specific data accessed, the operation performed, and a tamper-evident timestamp — at the operation level, not just the session or API-call level. For AI agents, this means every Protected B interaction must be logged with the agent’s unique identity credential, the human authorizer who delegated the workflow, the specific Protected B document or record accessed, and the operation type. Infrastructure logs and AI inference logs that capture only API calls or session events do not satisfy this requirement. Audit trail quality is the foundation of ITSG-33 compliance evidence.

What data residency requirements apply to AI platforms processing Protected B data?

ITSG-33 requires that cloud services processing Protected B data be assessed for the PBMM profile with Canadian data residency confirmed. This means AI platforms must not route Protected B data through infrastructure outside Canada or through cloud regions that have not been specifically assessed for PBMM compliance. When evaluating AI platforms, contractors should assess each component of the inference pipeline — model hosting, API gateway, vector database, output delivery — against the PBMM data residency requirement. Deployment options that keep all Protected B processing within Canadian-assessed infrastructure are the only architecture that satisfies this requirement for ITSG-33 Protected B compliance.

How do AI deployments affect Contract Security Program eligibility?

The Contract Security Program screens organizations and their security posture before granting access to Protected B data on government contracts. An AI deployment that cannot demonstrate ITSG-33-compliant access controls, audit trails, and data residency for Protected B data represents a security posture gap that can affect contract eligibility. Beyond procurement risk, inadequate protection of Protected B data creates breach notification obligations under PIPEDA and potential penalties under Quebec’s Law 25. Organizations should conduct a formal risk assessment of their AI deployments against ITSG-33’s PBMM control requirements before submitting for federal contracts involving Protected B data.

Get started.

It’s easy to start ensuring regulatory compliance and effectively managing risk with Kiteworks. Join the thousands of organizations who are confident in how they exchange private data between people, machines, and systems. Get started today.
