GDPR Regulators Aren’t Just Punishing Breaches Anymore. They’re Punishing the Security You Should Have Had.
There’s a shift happening in European data protection enforcement that every compliance and security leader needs to understand. It’s not subtle, and it changes the calculus of how you invest in data security.
Gibson Dunn’s February 2026 European data protection update summarises recent GDPR enforcement actions that make the new reality explicit. Regulators are no longer focused solely on what happened during a breach. They’re focused on what should have been in place before it. And they’re issuing fines in the tens of millions of euros for the absence of controls that, by now, they consider table stakes.
Two cases from the update illustrate the pattern. In the first, a regulator sanctioned an agency after attackers accessed personal data of individuals registered over 20 years. The findings: inadequate password policies, no multi-factor authentication, and insufficient logging and monitoring — all violations of Article 32 GDPR, the provision requiring “appropriate technical and organisational measures” to secure personal data. In the second, a telecom group was fined tens of millions of euros after attackers accessed data linked to approximately 24 million subscriber contracts. Authorities cited weak authentication, incomplete breach notifications under Article 33, and unlawful data retention under Article 5(1)(e).
The through-line is unmistakable. Regulators are penalising the structural weaknesses that enabled the breach — not just the breach itself. And the specific weaknesses they’re calling out are things most organisations could and should have fixed years ago.
5 Key Takeaways
- Regulators Are Penalising Structural Security Weaknesses, Not Just the Breaches They Cause. Gibson Dunn’s February 2026 European data protection update reveals a decisive enforcement shift. Regulators are no longer waiting for a breach to cause measurable harm before imposing fines. They are penalising the underlying structural weaknesses — inadequate authentication, insufficient logging and monitoring, unlawful data retention — that made the breach possible or increased its impact. The message: if your controls were inadequate, you’re liable regardless of whether the worst-case scenario materialised.
- One Agency Fined for Data Retained Over 20 Years — With No MFA, No Logging, and No Monitoring. Attackers accessed personal data of individuals registered over two decades. The regulator cited inadequate password policies, no multi-factor authentication, and insufficient logging and monitoring as violations of Article 32 GDPR. The organisation was sanctioned not just for the breach, but for the absence of controls that should have prevented or detected it. The fine and remediation orders reflect regulators’ view that these are baseline expectations, not aspirational targets.
- A Telecom Group Fined Tens of Millions After 24 Million Subscriber Records Exposed. Attackers accessed data linked to approximately 24 million subscriber contracts. Authorities cited weak authentication, incomplete breach notifications under Article 33, and unlawful data retention under Article 5(1)(e). The fine reached tens of millions of euros. The case demonstrates that regulators assess multiple dimensions simultaneously: did you prevent the breach, did you detect it, did you report it, and were you holding data you should have already deleted?
- Article 32 Now Clearly Requires MFA, Real-Time Logging, and Automated Retention. Across both cases, the enforcement actions define what “appropriate technical and organisational measures” under Article 32 means in practice: strong authentication including MFA for exposed accounts, real-time logging and detection of unauthorised access, and disciplined data minimisation and retention controls. These are no longer recommendations. They are the regulatory floor.
- The Burden of Proof Has Shifted — You Must Demonstrate Compliance, Not Just Claim It. Article 5(2)’s accountability principle requires organisations to prove they have appropriate measures in place. These cases make clear that regulators will examine whether you had preventive controls (MFA, access restrictions) and detective controls (logging, monitoring), whether those controls were operational at the time of the incident, and whether you can produce documentation showing it. Claiming you have “appropriate measures” without technical proof is no longer a defence.
What “Appropriate Technical Measures” Actually Means in 2026
For years, Article 32 GDPR has required organisations to implement “appropriate technical and organisational measures” to protect personal data. The language is deliberately broad. It gives organisations flexibility to determine what’s appropriate based on the risks they face.
That flexibility is narrowing. The enforcement actions highlighted by Gibson Dunn define what “appropriate” means with increasing specificity — and the bar is not high. It’s basic. It’s the security fundamentals that most compliance frameworks have recommended for years. The difference is that regulators are now treating the absence of these fundamentals as a violation, regardless of breach impact.
Strong authentication, including MFA. Both cases cited weak authentication as a violation. Passwords alone are no longer sufficient for any system handling personal data. The agency case specifically flagged the absence of multi-factor authentication. For regulators, MFA is no longer a best practice — it’s a requirement. And the expectation extends beyond basic MFA to contextual authentication: step-up verification for high-risk actions like bulk downloads or external sharing, device trust policies that restrict access to managed devices, and geolocation controls that flag access from unexpected locations.
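As an illustration, the contextual checks described above can be sketched as a simple policy function. Every name here (the `AccessRequest` fields, `HIGH_RISK_ACTIONS`, the country allow-list) is hypothetical, chosen for illustration rather than drawn from any particular product's API:

```python
from dataclasses import dataclass

# Illustrative policy inputs; real deployments would source these from
# identity provider and device management signals.
HIGH_RISK_ACTIONS = {"bulk_download", "external_share"}
TRUSTED_COUNTRIES = {"DE", "FR", "NL"}  # hypothetical allow-list

@dataclass
class AccessRequest:
    user: str
    action: str
    device_managed: bool
    country: str
    mfa_verified: bool

def authz_decision(req: AccessRequest) -> str:
    """Return 'deny', 'step_up' (require fresh MFA), or 'allow'."""
    if not req.device_managed:
        return "deny"      # device trust: managed devices only
    if not req.mfa_verified:
        return "step_up"   # baseline MFA for all personal data access
    if req.action in HIGH_RISK_ACTIONS:
        return "step_up"   # contextual step-up for high-risk actions
    if req.country not in TRUSTED_COUNTRIES:
        return "step_up"   # geolocation anomaly triggers re-verification
    return "allow"
```

The ordering matters: device trust is a hard gate, while a missing MFA factor or a risk signal escalates to step-up verification rather than an outright block.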
Real-time logging and detection. The agency case cited “insufficient logging and monitoring” as a distinct violation. Regulators expect organisations to detect unauthorised access as it happens — not discover it weeks or months later during a forensic investigation. This means comprehensive audit trails that capture every personal data access event: who accessed it, what they accessed, when, from where, and how. It means real-time alerting for suspicious activities like unusual access patterns, bulk downloads, or repeated failed authentication attempts. And it means immutable logging — tamper-proof records that can’t be altered after the fact.
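One common way to make an audit trail tamper-evident is hash chaining: each entry commits to the hash of the previous entry, so any retroactive edit breaks verification. A minimal sketch, using a hypothetical event schema and only the Python standard library:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only audit trail. Each entry embeds the hash of the
    previous entry, so altering any record after the fact breaks
    the chain and is detectable on verification."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, who, what, action, source_ip):
        entry = {
            "who": who, "what": what, "action": action,
            "from": source_ip, "when": time.time(),
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest
        return entry

    def verify(self):
        """Recompute every hash and check the chain links."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

In production the same idea is typically implemented with write-once storage or a signed log service; the point of the sketch is that "immutable" is a verifiable property, not a policy statement.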
Disciplined data minimisation and retention. The agency case involved personal data retained for more than 20 years. The telecom case cited “unlawful retention.” Article 5(1)(e)’s storage limitation principle requires that personal data be kept only as long as necessary for the purposes for which it was collected. Regulators are now enforcing this principle with fines — not just guidance. Organisations need automated retention policies that delete personal data after specified periods, legal hold management that preserves data subject to litigation while deleting everything else, and deletion audit trails that prove to regulators the data was removed when required.
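A retention sweep of the kind described above, with a legal-hold exemption and a deletion audit event per record, might look like the following sketch. The categories, retention periods, and field names are assumptions for illustration only:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention periods per data category; real schedules come
# from a documented retention policy, not code constants.
RETENTION = {
    "subscriber_contract": timedelta(days=365 * 6),
    "support_ticket": timedelta(days=365 * 2),
}

def apply_retention(records, now=None):
    """Split records into those kept and those due for deletion,
    skipping anything under legal hold, and emit a deletion audit
    event for each record removed."""
    now = now or datetime.now(timezone.utc)
    survivors, deletion_log = [], []
    for rec in records:
        limit = RETENTION.get(rec["category"])
        on_hold = rec.get("legal_hold", False)
        if limit and not on_hold and now - rec["created"] > limit:
            deletion_log.append({
                "id": rec["id"],
                "deleted_at": now.isoformat(),
                "reason": "retention_expired",
            })
        else:
            survivors.append(rec)
    return survivors, deletion_log
```

The deletion log is the piece most organisations skip: it is what lets you show a regulator not just a policy, but evidence that specific records were removed on schedule.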
Demonstrable compliance. Article 5(2)’s accountability principle is the thread that ties everything together. It’s not enough to have appropriate measures. You must be able to prove it. When a regulator asks whether you had MFA in place at the time of the incident, you need to produce evidence showing you did — not a policy document saying you should. When they ask whether you were monitoring for unauthorised access, you need audit trails and alerting configurations, not a slide deck from last year’s security review.
Why Enforcement Is Targeting Infrastructure, Not Just Incidents
This enforcement trend didn’t emerge in a vacuum. Regulators have spent years publishing guidance, issuing warnings, and making expectations clear. The shift to penalising structural weaknesses reflects a conclusion that guidance alone hasn’t worked.
The numbers support that conclusion. The average data breach now costs $4.88 million globally; in healthcare, $10.93 million (IBM Cost of a Data Breach Report, 2024). GDPR fines can reach €20 million or 4% of global annual revenue. And the EU AI Act adds another layer, with fines up to €35 million or 7% of revenue for high-risk violations. With AI agents and generative AI tools increasingly processing personal data — often through channels organisations can’t fully see — the regulatory pressure to prove that baseline controls are in place is only going to intensify.
The enforcement message is also getting more sophisticated. Regulators in these cases didn’t just find a breach and impose a fine. They assessed whether the organisation had preventive controls (MFA, access restrictions) that could have stopped the breach, detective controls (logging, monitoring) that could have caught it earlier, responsive controls (breach notification) that met the 72-hour Article 33 requirement, and retention controls that would have reduced the volume of data exposed. Each missing control was treated as a separate violation. The fines compound.
What Most Organisations Still Get Wrong About GDPR Security Compliance
Here’s the uncomfortable truth that these cases expose. Most organisations claim they have “appropriate measures” in place. Most cannot prove it under regulatory scrutiny. The gap between policy and reality is where fines happen.
The authentication gap. Many organisations have MFA policies. Fewer enforce MFA consistently across every system that handles personal data. Legacy file sharing platforms, email systems, and managed file transfer tools often lack the authentication controls that regulators now expect. Consumer-grade file sharing tools offer basic authentication but rarely support contextual step-up authentication for high-risk actions, device trust verification, or geolocation controls. When regulators examine your authentication infrastructure during an enforcement action, they’re not looking at your policy document. They’re looking at your technical implementation.
The logging gap. Many organisations log some activities. Few maintain comprehensive, immutable audit trails that capture every personal data access event across every channel — file sharing, email, managed file transfer, web forms, APIs. Fragmented tools produce fragmented logs. Each system has its own log format, its own retention, its own gaps. When regulators ask for a complete record of who accessed what personal data and when, most organisations discover they can’t produce one. The agency case cited “insufficient logging” as a violation. Insufficient means incomplete, non-comprehensive, or not tamper-proof.
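To illustrate the problem, producing one coherent access record from fragmented systems usually starts with normalising each source's log format into a single schema. The source names and field names below are hypothetical, standing in for whatever each real system emits:

```python
# Sketch of normalising heterogeneous per-system logs into one schema,
# so a single "who accessed what personal data, when, via which channel"
# record can be produced. All field names are illustrative assumptions.

def normalise(source, raw):
    if source == "fileshare":
        return {"who": raw["user"], "what": raw["path"],
                "when": raw["ts"], "channel": "fileshare"}
    if source == "email":
        return {"who": raw["sender"], "what": raw["attachment"],
                "when": raw["timestamp"], "channel": "email"}
    if source == "mft":
        return {"who": raw["account"], "what": raw["file"],
                "when": raw["time"], "channel": "mft"}
    raise ValueError(f"unknown log source: {source}")
```

Even this trivial mapping exposes the real cost of fragmentation: every new tool means another adapter, another retention setting, and another place a gap can hide.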
The retention gap. Data retention is the GDPR requirement that most organisations struggle with the most. It’s easy to write a retention policy. It’s far harder to enforce one automatically across every system where personal data lives. Email archives, file shares, backup systems, collaboration platforms — personal data accumulates across dozens of repositories. Without automated retention enforcement, data sits for years beyond its purpose. The agency case involved data kept for more than two decades. That’s not a failure of policy. It’s a failure of enforcement infrastructure.
The documentation gap. Article 5(2) requires organisations to demonstrate compliance — not just claim it. When regulators come calling, they want to see pre-built compliance reports showing MFA enforcement rates, access control configurations, and retention compliance. They want audit-ready documentation of technical and organisational measures. They want complete forensic timelines for breach notification under Article 33’s 72-hour window. And they want records of processing activities under Article 30 showing how personal data is actually handled, not how policy says it should be handled.
Closing the Gap Between Policy and Provable Compliance
The enforcement actions in Gibson Dunn’s update describe failures that are entirely preventable — if the right infrastructure is in place. Not a patchwork of separate tools for file sharing, email, managed file transfer, and web forms, each with its own authentication, logging, and retention capabilities. A unified platform that governs personal data across every channel with consistent controls and a single audit trail.
The Kiteworks Private Data Network was purpose-built for exactly this compliance challenge. It addresses each of the specific violations cited in Gibson Dunn’s cases — not as add-on features, but as foundational design principles.
For authentication, Kiteworks enforces MFA for all personal data access, with contextual step-up authentication for high-risk actions like bulk downloads and external sharing. SSO integration supports enterprise identity providers with MFA enforcement. Device trust restricts access to managed, compliant devices. Geolocation controls flag and block access from unexpected locations. These are the exact controls regulators cited as missing in the enforcement actions.
For logging and monitoring, Kiteworks provides comprehensive, immutable audit trails that capture every personal data access event — who, what, when, where, how — across every channel. Real-time alerting notifies security teams of suspicious activities immediately. AI-powered anomaly detection identifies unusual access patterns that may indicate compromise. And SIEM integration exports logs to enterprise security platforms for correlation with other security events. This is the logging and monitoring infrastructure regulators expected in both cases — and didn’t find.
For data retention, Kiteworks enforces automated retention policies that delete personal data after specified periods — with legal hold management to preserve data subject to litigation while deleting everything else. Deletion audit trails prove to regulators that data was removed when required. Data classification tags personal data by category with appropriate retention rules. This is the retention infrastructure that would have prevented the 20-year data accumulation and the unlawful retention cited in both enforcement cases.
For demonstrable compliance, Kiteworks provides pre-built GDPR-specific compliance reports, a CISO Dashboard showing real-time visibility into personal data access and policy violations, audit-ready documentation, and breach notification support with complete forensic timelines for Article 33’s 72-hour requirement. When regulators ask for proof that appropriate measures were in place, organisations using Kiteworks can produce it — because the platform generates it continuously, not retrospectively.
AI Agents Make Article 32 Compliance Harder — and More Urgent
These enforcement cases involve traditional breach scenarios — attackers compromising systems to access personal data. But the Article 32 requirements they reinforce become exponentially more challenging as organisations deploy AI agents and generative AI tools that process personal data at scale.
Every AI agent that accesses personal data creates a new identity that needs authentication and access controls. Every AI interaction with personal data needs to be logged. Every AI tool that processes personal data needs to respect retention policies. And every AI-data interaction needs to be auditable and demonstrable to regulators.
If regulators are fining organisations tens of millions of euros for missing MFA and insufficient logging in traditional systems, imagine the enforcement exposure when the same weaknesses exist in AI agent workflows — where data moves faster, at greater volumes, and with less human oversight.
The Kiteworks AI Data Gateway and Secure MCP Server extend the same Article 32 controls — authentication, logging, access governance, and retention enforcement — to AI interactions. Whether personal data is accessed by a human user through a file share or by an AI agent through an API, the controls, the audit trail, and the compliance evidence are identical. One platform. One policy engine. One immutable record.
The Enforcement Direction Is Clear. The Question Is Whether You’re Ready for It.
Gibson Dunn’s February 2026 update tells a story that compliance teams should take personally. Regulators are no longer interested in whether you have a data protection policy. They want to know whether you have the technical infrastructure to enforce it — and whether you can prove it under scrutiny.
The violations cited in these cases — no MFA, no real-time logging, no automated retention, unlawful data accumulation — are not exotic or novel. They are preventable failures of basic security hygiene. And they are exactly the failures that regulators have decided to make expensive.
The organisations that will weather this enforcement environment are the ones that can produce, at a moment’s notice, proof that MFA is enforced for every personal data access, that every access event is logged in an immutable audit trail, that retention policies are automated and auditable, and that they can reconstruct a complete forensic timeline within 72 hours of a breach.
The organisations that can’t produce that proof are the ones writing the cheques. And based on recent precedent, those cheques are getting larger.
To learn how Kiteworks can help, schedule a custom demo today.
Frequently Asked Questions
Can regulators fine us for inadequate security controls even if a breach caused no measurable harm?
Yes — and this is the enforcement shift that Gibson Dunn’s February 2026 update makes explicit. Regulators are now penalising the structural security weaknesses that enabled or increased the likelihood of a breach, independent of actual harm. Under Article 5(2)’s accountability principle, the burden of proof sits with the organisation: you must demonstrate that appropriate controls were in place. If you lacked MFA, real-time logging, or automated data minimisation controls, that absence is a violation — not a mitigating factor — even if the breach’s impact was ultimately contained.
What security controls do regulators now treat as the minimum under Article 32?
Based on recent enforcement, regulators treat three controls as the minimum floor under Article 32. First, strong authentication: MFA for every system that handles personal data, with contextual step-up verification for high-risk actions like bulk downloads or external sharing. Second, real-time logging: comprehensive, immutable audit trails capturing every personal data access event — who, what, when, where — with live alerting for suspicious patterns. Third, automated retention enforcement: policies that delete personal data after defined periods, with deletion records that prove compliance. Having a policy document for any of these is not the same as having the technical infrastructure to enforce them.
What does GDPR require for data retention and deletion?
Article 5(1)(e) requires that personal data be retained only for as long as necessary for its original purpose — after which it must be deleted or anonymised. The Gibson Dunn enforcement cases show regulators treating excessive retention as a standalone violation: one agency held data for over 20 years; the telecom case cited “unlawful retention” separately from the breach findings. The practical implication is that organisations need automated retention schedules, not manual ones. They also need legal hold capabilities to preserve data tied to litigation while deleting everything else, and deletion audit trails that produce verifiable proof of removal when regulators ask.
How do Article 32 requirements apply to AI agents that process personal data?
The same Article 32 requirements that apply to human access — comprehensive logging, real-time monitoring, tamper-proof records — apply to any system that accesses personal data, including AI agents. The enforcement risk is actually higher with AI agents because they operate at machine speed and volume, meaning ungoverned access can expose far more data before detection. Regulators have already shown they will fine organisations for “insufficient logging” in traditional systems; the same standard extends to AI workflows. Every AI-data interaction needs to be captured in an immutable audit trail with identity, timestamp, data accessed, and action taken — and those records must be producible to a regulator under Article 5(2)’s accountability principle.
What does Article 33’s 72-hour breach notification requirement actually entail?
Article 33 requires notification to the relevant supervisory authority within 72 hours of becoming aware of a personal data breach. The notification must describe the nature of the breach, the categories and approximate number of data subjects and records affected, the likely consequences, and the measures taken or proposed to address it. The telecom case cited in Gibson Dunn’s update was penalised for “incomplete breach notifications” — meaning the filing didn’t meet the completeness standard. This almost always traces back to inadequate logging infrastructure: if you lack comprehensive audit trails and real-time monitoring, you cannot reconstruct what happened, who was affected, and what data was exposed within 72 hours. The SIEM integration and forensic timeline capabilities that support Article 33 compliance are inseparable from the Article 32 logging requirements.
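As a small illustration of the mechanics, the 72-hour deadline and the Article 33(3) minimum-content check can be expressed directly. The report field names are assumptions standing in for whatever schema an incident response workflow uses:

```python
from datetime import datetime, timedelta, timezone

def article33_deadline(aware_at: datetime) -> datetime:
    """Notification to the supervisory authority is due 72 hours after
    the controller becomes aware of the breach (Article 33(1))."""
    return aware_at + timedelta(hours=72)

def notification_complete(report: dict) -> bool:
    """Check for the Article 33(3) minimum content: nature of the breach,
    categories and approximate numbers of data subjects and records,
    likely consequences, and measures taken or proposed.
    Field names here are hypothetical."""
    required = {"nature", "subject_categories", "approx_subjects",
                "approx_records", "likely_consequences", "measures"}
    return required <= report.keys()
```

The hard part, as the enforcement cases show, is not the arithmetic but being able to populate those fields from audit trails within the window.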
Additional Resources
- Blog Post: Understand and Adhere to GDPR Data Residency Requirements
- Blog Post: How to Email PII in Compliance with GDPR: Your Guide to Secure Email Communications
- Blog Post: Achieve GDPR Compliance to Comply With EU’s New Data Privacy Law
- Blog Post: How to Share Files with International Partners Without Violating GDPR
- Blog Post: How to Create GDPR-compliant Forms