CJEU 2026 Ruling: Pseudonymized Data Is No Longer Exempt From GDPR
On March 4, 2026, the French Conseil d’État upheld a €40 million CNIL fine against adtech company Criteo SA (NASDAQ: CRTO) for multiple GDPR violations tied to its behavioral advertising practices. Gibson Dunn’s April 2026 Europe data protection update situates that ruling within a broader enforcement wave and, critically, highlights the CJEU’s clarification that pseudonymized cookie identifiers tied to browsing data, IP addresses, and purchase histories remain personal data when re-identification is feasible without disproportionate effort.
Key Takeaways
- The CJEU closed the pseudonymization loophole for online identifiers. The Court of Justice of the European Union has clarified that pseudonymized cookie identifiers linked to browsing data, IP addresses, and purchase histories remain personal data under GDPR when re-identification is feasible without disproportionate effort. That classification drives every downstream compliance obligation.
- France is operationalizing the ruling through enforcement, not guidance. The French Conseil d'État upheld a €40 million CNIL fine against adtech company Criteo SA on March 4, 2026, closing a legal battle that began with the CNIL's 2023 sanction. France's GDPR enforcement posture on adtech is no longer theoretical.
- The AI training data implications are substantial. Organizations that classified cookie IDs, device identifiers, or behavioral fingerprints as "non-personal" and fed that data into model training now face retroactive exposure. The EDPB has already held that AI models trained on personal data cannot automatically be considered anonymous.
- Access request defensibility is a new regulatory battleground. The CJEU simultaneously clarified that controllers may refuse "abusive" GDPR access requests — but only with documented, defensible criteria. Blanket refusals in the face of mass-submitted automated requests will themselves become enforcement triggers.
- Data classification, not data anonymization, is the enforceable control. The response to the ruling is not to invent new pseudonymization schemes. It is to treat online identifiers as personal data inside data inventories, apply attribute-based access controls, maintain data residency within the EU, and produce tamper-evident audit trails that can defend processing decisions.
Read together, these two developments redraw the boundaries of GDPR compliance for any organization processing online identifiers at scale. And for any organization running adtech, behavioral analytics, AI training on customer data, or cross-border data flows involving EU residents, the consequences are now quantified in eight-figure fines.
The pseudonymization escape hatch — the compliance theory that “we don’t hold names, we hold hashed identifiers, so GDPR doesn’t apply” — just closed.
What the CJEU Actually Ruled and Why Classification Matters
The CJEU confirmed that the test for whether data is “personal” under GDPR Article 4(1) is not whether the controller holds directly identifying information, but whether re-identification is feasible using means reasonably likely to be used. Cookie identifiers linked to browsing behavior, IP addresses, and purchase data satisfy that test because the combination creates a uniquely identifiable profile even without names or email addresses attached. This position is consistent with earlier EDPB guidance and the September 2025 CJEU SRB case, which applied a relative approach to de-identification.
The operational impact is direct. Any dataset containing cookie IDs, device fingerprints, or other pseudonymous online identifiers combined with behavioral data is now unambiguously within GDPR scope. Every provision that applies to personal data applies to it: lawful basis requirements under Article 6, data minimization under Article 5(1)(c), data subject rights under Articles 15 through 22, security safeguards under Article 32, and breach notification obligations under Articles 33 and 34. An organization that previously maintained those datasets outside its GDPR governance program is now retroactively noncompliant across all of those provisions simultaneously.
The ruling also clarifies how “reasonably likely” is interpreted. It is not a theoretical exercise asking whether re-identification is possible in principle. It asks whether the combination of data elements, the available matching techniques, and the economic incentives make re-identification a practical risk. For adtech ecosystems that exist precisely to identify and track users across sessions and sites, the answer is almost always yes. The architecture of behavioral advertising requires re-identifiability. That requirement now classifies the entire data pipeline as personal data processing.
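The "reasonably likely" test can be operationalized as a conservative classification rule: if a dataset pairs any persistent online identifier with behavioral signals, treat it as personal data from the start. A minimal illustrative sketch in Python (the field-name sets are hypothetical examples, not a legal standard):

```python
# Illustrative conservative personal-data classification rule.
# Field-name sets are hypothetical examples, not a legal standard.
ONLINE_IDENTIFIERS = {"cookie_id", "device_fingerprint", "advertising_id",
                      "ip_address", "hashed_email"}
BEHAVIORAL_SIGNALS = {"browsing_history", "purchase_history", "clickstream",
                      "content_interactions"}

def is_personal_data(columns: set[str]) -> bool:
    """Flag any dataset that pairs a persistent identifier with behavior.

    Under the CJEU's relative approach, the combination is what makes
    re-identification feasible, so the rule triggers on the pairing,
    not on the presence of a name or email address.
    """
    return bool(columns & ONLINE_IDENTIFIERS) and bool(columns & BEHAVIORAL_SIGNALS)

# A "pseudonymous" adtech dataset still classifies as personal data:
print(is_personal_data({"cookie_id", "browsing_history", "timestamp"}))  # True
# Aggregated behavior with no identifier attached does not trigger the rule:
print(is_personal_data({"browsing_history", "region"}))  # False
```

The deliberate asymmetry is the point: the rule over-classifies rather than litigating edge cases, which is cheaper than defending an under-classification to a supervisory authority.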
France Is Converting the Ruling Into Enforcement at Scale
The CNIL’s €40 million fine against Criteo, upheld by the Conseil d’État, was not an outlier. The aggregate 2025 European GDPR fine tally exceeded €1.2 billion according to the DLA Piper GDPR Fines and Data Breach Survey, with cumulative fines since 2018 passing €5.88 billion and the 2026 edition documenting a 22% year-over-year increase in breach notifications.
The enforcement pattern focuses heavily on Articles 5(1)(a) — lawfulness, fairness, and transparency — and 5(1)(f) — integrity and confidentiality. Both articles depend on the classification of data as personal. Once pseudonymized identifiers fall within that classification, every behavioral advertising pipeline becomes a lawful-basis problem, a transparency problem, and an access-rights problem simultaneously.
France’s enforcement velocity matters globally because other European data protection authorities have historically followed French leadership on adtech and tracking. The Irish Data Protection Commission, the Italian Garante, and the German Landesdatenschutzbeauftragte have all signaled alignment with the CNIL’s posture. Organizations that treated €40 million as a French peculiarity rather than a European baseline are now watching similar enforcement actions queue up in Dublin, Rome, and Berlin.
The AI Training Data Implications Are Retroactive
The CJEU ruling’s most consequential downstream effect may be in AI training. Over the past three years, many enterprises have constructed large training datasets from what their legal teams classified as “non-personal” data: cookie identifiers, device fingerprints, behavioral telemetry, clickstream logs, and purchase histories without names attached. Those datasets fed fine-tuning runs, recommendation engine development, and custom model training for marketing, fraud detection, and personalization use cases.
The EDPB Opinion 28/2024 already held that AI models trained on personal data cannot automatically be considered anonymous, and that each case requires demonstrated resistance to extraction and query attacks. Combined with the CJEU’s reaffirmation that pseudonymized online identifiers remain personal data, the result is a compliance reclassification: If the training data was personal, the trained model itself must be evaluated for memorization and extraction risk, and the original training likely lacked adequate lawful basis under Article 6.
This is retroactive exposure. The Italian Garante's fine against OpenAI in late 2024 (later annulled by an Italian court in March 2026, per Reuters reporting) established the regulatory template: massive fines for unlawful training data processing, with no practical remediation path short of model retraining. Organizations that built production AI capabilities on "pseudonymous" training sets are now carrying compliance debt that is difficult to discharge without architectural change. Retraining on a legally defensible data foundation is expensive. Continuing to serve models trained on now-reclassified personal data carries ongoing exposure.
The German court ruling in November 2025 compounds the issue. As documented in the Future of Privacy Forum’s 2026 assessment, a German court held that song lyrics were “reproducibly contained and fixed in model weights,” treating models as copies for intellectual property purposes. That framing has obvious implications for personal data. If a model can reproduce training content, and if the training content contained personal data, then the model itself contains personal data — with all the obligations that classification triggers.
Why Enterprises Got Pseudonymization Wrong
The widespread enterprise assumption that pseudonymization automatically exits data from GDPR scope reflects a compliance shortcut that survived long past its useful life. Pseudonymization under GDPR Article 4(5) is defined as processing such that personal data can no longer be attributed to a specific data subject without the use of additional information. The key clause is the last one. Pseudonymized data remains personal data — it is simply personal data subject to additional safeguards. It is not anonymized data, and it never was.
The confusion arose from adtech’s rhetorical positioning. Behavioral advertising ecosystems described their data pipelines as operating on “anonymized” or “pseudonymized” data as if those terms were interchangeable. Enterprise legal teams, looking for a basis to exclude behavioral datasets from GDPR compliance overhead, accepted the framing. The CJEU has now definitively rejected it for the specific combination of identifiers and behavioral data that most enterprise adtech uses.
The Kiteworks Data Security and Compliance Risk: 2026 Data Sovereignty Report found that approximately 15% of European respondents describe themselves as “extremely concerned” about GDPR fine exposure, a number that reflects the weight of cumulative enforcement actions now exceeding €5.88 billion. Concern has not yet translated to architecture. Many organizations still run parallel data governance programs where “personal data” datasets flow through rigorous controls and “pseudonymous” datasets flow through loose controls. That architecture is now a liability.
The Access Request Battleground Is the Next Enforcement Frontier
The CJEU’s ruling did more than narrow the pseudonymization defense. It also clarified when controllers may refuse GDPR access requests as abusive — and the answer is narrower than many organizations have assumed. Controllers may treat certain requests as abusive when disproportionate burden, manifestly unfounded intent, or coordinated automation makes individual processing genuinely unreasonable. But the Court emphasized that the burden of substantiating abuse lies with the controller, and blanket refusals will not satisfy Article 12(5).
This matters because automated and mass-submitted access requests have become a legitimate enforcement tool. Privacy advocacy groups, journalists, and increasingly consumers using AI assistants to generate compliance requests are exercising Article 15 rights at volumes that strain manual response workflows. Some organizations have responded by refusing such requests categorically. The CJEU has now said that approach itself creates enforcement exposure.
The operational consequence is that data subject access request infrastructure is no longer a nice-to-have operational function. It is a regulated workflow where every refusal must be documented, every justification must be substantiable, and every timeline must meet Article 12(3) requirements. Organizations that rely on manual, ad hoc access request processes will find themselves either responding to volumes they cannot handle or generating refusal decisions that become the basis for complaints to supervisory authorities.
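The documentation burden this places on controllers can be modeled as a structured refusal record: no refusal is issued without a named ground, request-specific reasoning, and supporting evidence. A minimal sketch, with field names and abuse-ground categories as illustrative assumptions rather than anything Article 12(5) prescribes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative DSAR refusal record; fields and ground categories are
# assumptions for this sketch, not prescribed by Article 12(5).
ABUSE_GROUNDS = {"disproportionate_burden", "manifestly_unfounded",
                 "coordinated_automation"}

@dataclass
class RefusalRecord:
    request_id: str
    ground: str                 # one documented abuse ground per refusal
    justification: str          # substantive, request-specific reasoning
    evidence_refs: list[str]    # e.g. submission-volume logs, burden estimates
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def record_refusal(request_id: str, ground: str,
                   justification: str, evidence_refs: list[str]) -> RefusalRecord:
    """Refuse only with a named ground, reasoning, and supporting evidence."""
    if ground not in ABUSE_GROUNDS:
        raise ValueError(f"unrecognized abuse ground: {ground}")
    if not justification or not evidence_refs:
        raise ValueError("blanket refusals are not defensible: per-request "
                         "justification and evidence are required")
    return RefusalRecord(request_id, ground, justification, evidence_refs)
```

The design choice worth noting: the workflow makes an undocumented refusal impossible to record at all, which converts the Court's evidentiary standard into a structural property of the system rather than a policy reminder.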
How Data-Layer Governance Addresses the Classification Problem
The architectural response to the CJEU ruling is not to reclassify data after the fact. It is to build governance infrastructure that treats identifier-plus-behavior combinations as personal data from the moment of collection and enforces the resulting obligations at the data layer. That means data inventories that recognize cookie IDs, device fingerprints, and behavioral fingerprints as personal data; attribute-based access controls that enforce lawful-basis limitations on who can query or export those datasets; data residency enforcement that keeps EU personal data within EU jurisdiction; and audit trails that can defend every processing decision to a supervisory authority on request.
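Enforcing lawful-basis limits at access time, rather than trusting application logic, can be sketched as an attribute-based check: every query carries a purpose, and the dataset's tags decide whether that purpose and the requester's region are permitted. A hypothetical illustration (the tag names, purposes, and policy table are assumptions for this sketch, not any product's actual policy model):

```python
# Hypothetical attribute-based access check applied at query time.
# Tag names, purposes, and the policy table are illustrative assumptions.
DATASET_TAGS = {
    "eu_behavioral_profiles": {"personal_data", "residency:eu",
                               "lawful_basis:consent"},
}

# Which processing purposes each lawful-basis tag permits (simplified).
PERMITTED_PURPOSES = {
    "lawful_basis:consent": {"ad_personalization", "analytics"},
    "lawful_basis:legitimate_interest": {"fraud_detection", "security"},
}

def authorize(dataset: str, purpose: str, requester_region: str) -> bool:
    """Deny by default; allow only when tags permit the purpose and region."""
    tags = DATASET_TAGS.get(dataset, set())
    if "residency:eu" in tags and requester_region != "eu":
        return False  # residency restriction enforced at the data layer
    allowed: set[str] = set()
    for tag in tags:
        allowed |= PERMITTED_PURPOSES.get(tag, set())
    return purpose in allowed

print(authorize("eu_behavioral_profiles", "ad_personalization", "eu"))  # True
print(authorize("eu_behavioral_profiles", "ad_personalization", "us"))  # False
```

Because the check is deny-by-default and keyed on data attributes rather than application identity, a dataset reclassified tomorrow needs only a tag change, not a code change, to inherit the stricter controls.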
Kiteworks operates this governance at the data layer rather than the application layer. Four capabilities are particularly relevant to the CJEU ruling:
- The Kiteworks Data Policy Engine enforces attribute-based access controls on every data interaction. Data can be tagged as personal data, residency-restricted, or purpose-limited, with controls applied at access time rather than trusted to application logic.
- DSPM integration with Microsoft Information Protection sensitivity labels lets classification flow from external data governance tools into operational policy, ensuring that cookie-ID-plus-behavior datasets retain their personal-data classification across systems.
- Geofencing and data sovereignty controls configure distributed systems to store EU personal data only within designated jurisdictions and route access only through those jurisdictions, addressing Article 44 cross-border transfer obligations without relying on case-by-case contract review.
- The GDPR compliance report produces a structured evidence package that maps operational controls to GDPR articles, providing the evidentiary foundation for defending processing decisions and access request handling during supervisory audits.
The architectural argument is that classification disputes will intensify as GDPR enforcement accelerates, and the defensible position is not to argue the classification but to build controls that apply uniformly to anything that might be classified as personal data. That shifts the compliance burden from legal interpretation to operational enforcement, and it survives the next CJEU ruling, the next EDPB opinion, and the next national regulator’s guidance update.
What the Ruling Means for Your Compliance Program This Year
First, audit your data classification and inventory against the CJEU’s clarified scope. Any dataset combining online identifiers (cookie IDs, device fingerprints, advertising IDs, IP addresses, hashed emails) with behavioral signals (browsing, purchases, content interactions) must be tagged as personal data. The EDPB Guidelines 04/2022 on calculating GDPR fines treat the presence of documented technical and organizational measures as a mitigating factor — but only if the classification underlying them is correct.
Second, evaluate your AI training data lineage. For every model trained or fine-tuned on data that your compliance program classified as “pseudonymous” or “non-personal,” reassess the classification under the CJEU’s reasoning and the EDPB’s December 2024 opinion on AI model anonymity. Models trained on retroactively reclassified personal data face both a lawful-basis problem and a model-itself-as-personal-data problem.
Third, rebuild your access request infrastructure to handle volume and abuse defensibility simultaneously. Manual workflows that generate blanket refusals will generate supervisory complaints. Document every refusal decision with substantive justification. Automate fulfillment for legitimate requests. Track metrics on response time, completion rate, and refusal rate, because regulators will request those metrics during enforcement investigations.
Fourth, consolidate sensitive data exchange under unified governance. The Kiteworks Data Security and Compliance Risk: 2026 Forecast Report found that organizations running fragmented tool stacks for secure data exchange face systematically higher compliance risk and slower regulatory response times. A consolidated control plane produces uniform policy enforcement, uniform audit evidence, and uniform data residency enforcement — which is the operational equivalent of the architectural consistency the CJEU is implicitly requiring.
Fifth, treat cross-border data transfer discipline as a first-class control. The CJEU ruling is part of a longer trajectory that also includes the Data Act (enforceable since September 2025) and the EU AI Act’s phased applicability through 2027. Cross-border transfer mechanisms — adequacy decisions, SCCs, BCRs — require operational enforcement at the data layer, not just legal documentation in a DPA. Data sovereignty and geofencing controls that actually route and store EU personal data within EU jurisdiction are the controls that survive audit scrutiny.
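Operational enforcement of transfer mechanisms can be sketched as a pre-export gate: a transfer of personal data proceeds only if a documented Article 44 mechanism covers the destination. The registry contents below are illustrative placeholders, not a current list of adequacy decisions or executed SCCs:

```python
# Illustrative pre-transfer gate: block exports of EU personal data unless a
# documented Article 44 mechanism covers the destination. Registry contents
# are hypothetical placeholders, not a current legal inventory.
ADEQUACY_DECISIONS = {"uk", "japan", "switzerland"}   # illustrative subset
SCC_COVERED = {("acme_crm", "us")}                    # (processor, country) pairs

def transfer_allowed(is_personal: bool, destination: str, processor: str) -> bool:
    """Gate every export at the data layer, not in contract paperwork."""
    if not is_personal:
        return True                                   # out of Article 44 scope
    if destination in ADEQUACY_DECISIONS:
        return True                                   # adequacy decision applies
    return (processor, destination) in SCC_COVERED    # SCCs/BCRs on file, or deny
```

The gate fails closed: a destination with no registered mechanism is denied by default, which is the routing behavior that actually survives audit scrutiny, as opposed to a DPA clause that nothing in the pipeline checks.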
Sixth, prepare the evidentiary package before you need it. The difference between a €40 million fine and a warning letter often comes down to the technical and organizational measures an organization can demonstrate at the moment enforcement action begins. An organization that can produce a GDPR compliance report, attribute-based access logs, data residency evidence, and audit trails of data subject request handling on twenty-four hours’ notice is in a substantially different enforcement position than one that produces the same evidence after a six-week scramble.
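Tamper evidence for the audit trail itself can be sketched with a hash chain: each entry commits to the previous entry's hash, so any retroactive edit breaks every subsequent link and is detectable on verification. A minimal, self-contained illustration:

```python
import hashlib
import json

def append_entry(chain: list[dict], event: dict) -> None:
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any retroactive edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"action": "dsar_fulfilled", "request": "r-101"})
append_entry(log, {"action": "export_blocked", "dataset": "eu_profiles"})
print(verify(log))                      # True
log[0]["event"]["action"] = "edited"    # tampering with history...
print(verify(log))                      # False
```

A chain like this does not prevent tampering, it makes tampering self-incriminating, which is what turns an audit log into evidence a supervisory authority can credit.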
The CJEU ruling is not the end of the pseudonymization debate. It is the start of a new regulatory cycle in which enforcement precedes guidance, fines precede policy clarity, and the organizations that built operational governance will navigate the cycle with significantly less friction than those still debating classification in legal memos.
Frequently Asked Questions
Can we still run marketing personalization on cookie identifiers under GDPR?
Defensibility depends on the lawful basis and safeguards in place. The CJEU’s 2026 clarification confirms cookie IDs combined with browsing, IP, and purchase data are personal data under GDPR. Current programs relying on “pseudonymization exempts us” theories are exposed. Defensible programs require explicit consent or another Article 6 lawful basis, data minimization, transparent privacy notices, and data subject rights fulfillment infrastructure. Marketing personalization remains possible — but only with the full GDPR compliance stack applied.
Do AI models trained on “pseudonymous” identifier data now face GDPR exposure?
Yes, and the risk is material. The EDPB Opinion 28/2024 already held AI models trained on personal data cannot automatically be considered anonymous. Combined with the CJEU ruling reclassifying your training data as personal, deployed models face both lawful-basis exposure for the original training and model-itself-as-personal-data obligations. Remediation paths include retraining on legally defensible data, implementing extraction-resistance measures, or restricting deployment scope.
How does the ruling affect cross-border transfers of adtech data?
The ruling intensifies cross-border transfer obligations. Once cookie IDs and behavioral data are classified as personal data, any transfer outside the EU requires an Article 44 mechanism — adequacy decision, standard contractual clauses, or binding corporate rules. The EU-US Data Privacy Framework remains in force, but its applicability to adtech is contested. Organizations relying on operational data routing should implement geofencing and data sovereignty controls that keep EU personal data within the EU.
Can controllers refuse automated or mass-submitted access requests as abusive?
Refusal is possible but narrowly scoped. The CJEU clarification confirms controllers may treat requests as abusive when disproportionate burden or manifestly unfounded intent is substantiable. Blanket refusals fail. Required documentation includes per-request justification, evidence of actual burden or bad faith, and demonstration that legitimate requests are still processed. Organizations refusing at volume without case-by-case analysis will face complaints to supervisory authorities and likely enforcement action.
What security controls do regulators expect for reclassified personal data?
Article 32 controls must address the reclassified scope of personal data. The EDPB Guidelines 04/2022 treat implemented controls as mitigating factors in fine calculation. Core expectations include attribute-based access controls with documented lawful-basis enforcement, encryption at rest with strong key management, tamper-evident audit logs of data access and processing decisions, data residency enforcement for cross-border obligations, and structured GDPR compliance reporting. Platforms like Kiteworks provide these controls at the data exchange layer.