NVD Shift: Is Your Data Secure?

What NIST’s NVD Retreat and Claude Mythos Mean for Enterprise Data Security

In one week in April 2026, two announcements that were covered as separate stories collided into a single structural event. The U.S. National Institute of Standards and Technology formally conceded that it can no longer enrich the majority of vulnerabilities flowing into the National Vulnerability Database. One day earlier, the Cloud Security Alliance published a briefing — signed by some of the most senior cybersecurity leaders in the world — warning that Anthropic’s Claude Mythos Preview represents a step change in autonomous AI-driven vulnerability discovery. The defender’s triage system is narrowing coverage by design. The attacker’s discovery system is industrializing. The twenty-year wager that underpins enterprise vulnerability management — that defenders can patch faster than attackers can weaponize — has been lost.

Key Takeaways

  1. The NVD’s Authoritative Era Is Over. CVE submissions grew 263% between 2020 and 2025. The backlog now exceeds 30,000 entries. NIST has formally stopped trying to enrich most of them. For any organization running CVSS-based patch prioritization, the data source has gone dark for the majority of new vulnerabilities.
  2. Claude Mythos Is Not Hype. Independent evaluation by the U.K.’s AI Security Institute confirmed autonomous discovery of thousands of zero-days, working exploit generation without human guidance, and end-to-end completion of a 32-step corporate network attack simulation. The cost floor for elite offensive capability just collapsed.
  3. The Speed Math Doesn’t Work Anymore. Average time-to-exploit is now negative seven days. Average remediation for critical vulnerabilities is 74 days. The eCrime breakout time after initial access is 29 minutes. That is not a gap. That is a structural break.
  4. Application-Layer Defense Cannot Scale to What Comes Next. More scanners, more tickets, more patch windows — none of it reaches the layer where the exploit actually lands. The only durable defense is one that governs, encrypts, and audits the data itself, under controls that work regardless of which vulnerability an attacker exploits.
  5. The Proof Already Exists. During the Log4Shell crisis, organizations with hardened data-layer architectures experienced the industry-wide CVSS 10 vulnerability as something closer to a CVSS 4. The next Log4Shell will not arrive with a CVE number attached. Architecture is the only thing that will be ready for it.

The April 2026 Collision: Two Announcements, One Event

On 14 April 2026, NIST announced that the National Vulnerability Database would no longer try to enrich the majority of CVEs flowing into the system. Submissions grew 263% between 2020 and 2025. NIST enriched almost 42,000 CVEs in 2025 — a 45% year-over-year increase — and still watched the backlog climb above 30,000 unanalyzed entries. Going forward, NVD will prioritize three narrow categories: CVEs in CISA’s Known Exploited Vulnerabilities catalog, software used by the U.S. federal government, and vaguely defined “critical software”. Everything else will be flagged “not scheduled” — no severity score, no analysis, no signal that a vulnerability scanner can consume.

One day earlier, the Cloud Security Alliance had published a briefing on Anthropic’s Claude Mythos Preview, signed by former CISA Director Jen Easterly, Bruce Schneier, Chris Inglis, and Phil Venables, among dozens of other senior security leaders. Its core argument: Mythos represents “a step change” in AI-driven vulnerability discovery, and “the window between discovery and weaponization has collapsed to hours.” The U.K.’s AI Security Institute independently verified the capability, reporting that Mythos completed a 32-step corporate network attack simulation — reconnaissance through full takeover — outperforming every other AI system tested.

The convergence is the story. The centralized intelligence system that defenders rely on is contracting at exactly the moment the autonomous discovery system attackers can use is expanding. For organizations that built their security programs around CVE enrichment, triage, and SLA-driven patching, the operating assumptions have quietly changed.

What “Not Scheduled” Actually Means for Your Security Program

The phrase “not scheduled” sounds bureaucratic. The operational reality is more serious than it reads.

Dustin Childs, who leads threat awareness at Trend Micro’s Zero Day Initiative, told CSO Online what the announcement means in plain language: NIST has “publicly stated, ‘We are never going to get through this backlog.’” The Forum of Incident Response and Security Teams forecasts 59,427 CVEs in 2026, up from just over 48,000 in 2025, with modeled scenarios that exceed 100,000 — and that modeling was completed before the broader disclosure wave that Mythos-class tools are expected to trigger.

The quality problem is worse than the quantity problem. Dragos’s 2026 OT/ICS Year in Review documented that 15% of CISA and NVD CVEs had incorrect CVSS scores in 2025 — and 64% of those corrections adjusted severity upward because vendors had understated risk. Twenty-five percent of public advisories contained no patch or mitigation guidance at all. NIST has formalized an erosion already well under way.

For organizations running CVSS-based patch management — which is most organizations — the data source their tools depend on is about to go dark for the majority of new vulnerabilities. For U.S. organizations operating under SEC cyber disclosure obligations, for federal contractors under CMMC and DFARS, for healthcare organizations under HIPAA’s risk analysis requirements, the documentation problem is non-trivial: How do you demonstrate a risk-based prioritization process when the risk signal itself is incomplete?

Why Claude Mythos Changes the Attacker’s Economics

The easy reaction to Mythos is to dismiss it as hype. The Cloud Security Alliance, AISI, and an unusually broad bench of former government cyber leaders reached the same conclusion independently: The class of capability is real, and the economics are now inverted.

The AISI evaluation documented four Mythos Preview capabilities worth internalizing. First, autonomous discovery of thousands of high- and critical-severity vulnerabilities across every major operating system and browser — produced not through specialized offensive training but as a downstream consequence of improvements in code reasoning and autonomy. Second, working exploit generation without human guidance. Third, successful exploitation of weakly defended systems once access was obtained. Fourth, end-to-end completion of a 32-step corporate network attack simulation that previously took a skilled human red-teamer roughly 20 hours. Mythos even surfaced a 17-year-old remote code execution vulnerability in FreeBSD’s NFS server and exploited it autonomously in about four hours.

The CrowdStrike 2026 Global Threat Report provides ambient context. Zero-day exploits grew 42% year-over-year. AI-enabled adversary attacks grew 89%. The average eCrime breakout time after initial access is 29 minutes. That data was collected before Mythos-class capability reached broad access. Gadi Evron, CISO-in-Residence for AI at the Cloud Security Alliance, told CSO Online: “The storm of vulnerability disclosures from Project Glasswing is the first of many large waves.”

The asymmetry is the architectural fact of cybersecurity in 2026. The defender’s triage system is narrowing by design. The attacker’s discovery system is industrializing. Patch cycles measured in weeks cannot outrun exploit cycles measured in hours.

The Dishonest Defense: Why Doing More of the Same Is Now the Riskiest Strategy

Faced with this collision, many security programs will default to a predictable response: more scanners, more tickets, tighter patch windows, more dashboards. This is the dishonest defense. It treats the problem as one of execution — we just need to patch better — when the problem is structural.

Three structural facts make that approach inadequate. First, the CVE system feeding the execution layer is no longer comprehensive. When NVD flags something “not scheduled,” your scanner’s prioritization engine receives no signal. Second, AI-driven discovery tools don’t wait for NVD enrichment before attackers weaponize their findings. Third, the average time to remediate critical vulnerabilities remains 74 days — a window that was already unsustainable when time-to-exploit was positive fourteen days, let alone negative seven.

The regulatory environment sharpens the problem. SEC cyber disclosure rules, HHS enforcement of the HIPAA Security Rule, and the FTC Safeguards Rule all hold organizations accountable for “reasonable” or “appropriate” technical safeguards — and pointing at an incomplete NVD record is not a defense of inadequate controls. HHS Office for Civil Rights HIPAA enforcement exceeded $100 million in 2024, with penalties tied directly to inadequate access controls and encryption. The NSA’s April 2024 Cybersecurity Information Sheet on the Data Pillar states plainly that perimeter defenses alone are insufficient and that adversaries who gain a foothold often gain unfettered access to all data.

The honest defense begins with a harder question. If you cannot assume you will know about every exploitable vulnerability before it is weaponized, what does enterprise security actually mean? The answer is that defense has to move down a layer. The asset itself has to carry its own protection.

The Architectural Answer: Data-Layer Governance

Data-layer governance is the architectural posture that remains effective when application-layer controls fail. It protects sensitive data through embedded controls that work regardless of which CVE an attacker exploits, which agent is compromised, or which vulnerability was never scored.

Attribute-Based Access Control at the Content Level. Policies are enforced on the data itself — based on attributes of the user, the data, the time, the purpose, and the context — rather than at the network perimeter or the application boundary. A file that should only be accessed by authorized staff within a specific geography and time window carries that policy with it, whether it sits in a file share, an email attachment, or an AI query context.
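The attribute-evaluation logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the class names, attributes, and policy fields are invented for this example and do not reflect any specific product’s implementation — but it shows the core idea: the policy is attached to the data object and every request attribute must satisfy it before access is granted.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of content-level ABAC: the policy travels with the
# data object and is evaluated against request attributes at access time.

@dataclass
class Policy:
    allowed_roles: set
    allowed_regions: set
    access_window: tuple          # (start_hour, end_hour) in UTC
    allowed_purposes: set

@dataclass
class ProtectedFile:
    name: str
    policy: Policy                # policy is carried with the file itself

def evaluate(policy, *, role, region, purpose, now):
    """Allow access only when every attribute of the request satisfies the policy."""
    start, end = policy.access_window
    return (
        role in policy.allowed_roles
        and region in policy.allowed_regions
        and purpose in policy.allowed_purposes
        and start <= now.hour < end
    )

doc = ProtectedFile(
    "q3-clinical-results.pdf",
    Policy({"analyst"}, {"EU"}, (8, 18), {"treatment-review"}),
)

request_time = datetime(2026, 4, 20, 10, 0, tzinfo=timezone.utc)
print(evaluate(doc.policy, role="analyst", region="EU",
               purpose="treatment-review", now=request_time))   # True
print(evaluate(doc.policy, role="analyst", region="US",
               purpose="treatment-review", now=request_time))   # False: wrong geography
```

Because the check triangulates user, data, time, purpose, and context in a single decision, a request that passes the network perimeter but fails any one attribute is still denied.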

FIPS 140-3 Encryption with Customer-Managed Keys. Sensitive data is encrypted at rest and in transit with cryptographic modules validated to the current federal standard. Customer-managed keys backed by hardware security modules ensure the organization — not the cloud provider, not the SaaS vendor, not the AI model — controls access.

Tamper-Evident Audit Logging. Every interaction with sensitive data generates a normalized log entry delivered to SIEM in real time, with no throttling and no hidden delays. When a breach occurs, forensic reconstruction does not require knowing which specific CVE was exploited; the data trail is complete.
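One common way to make a log tamper-evident is a hash chain, where each entry’s digest covers the previous entry’s digest, so editing any historical record invalidates everything after it. The toy sketch below illustrates that mechanism with the standard library; it is a conceptual example, not a description of any particular product’s logging format.

```python
import hashlib
import json

# Toy hash-chain log: each entry's hash covers the previous entry's hash,
# so altering any historical record breaks the chain from that point on.

def append(log, event):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify(log):
    """Recompute the chain; any edited or reordered entry fails verification."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256((prev_hash + body).encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append(log, {"user": "svc-etl", "file": "payroll.csv", "action": "read"})
append(log, {"user": "agent-7", "file": "payroll.csv", "action": "download"})
print(verify(log))                       # True: chain intact
log[0]["event"]["action"] = "delete"     # attacker edits history
print(verify(log))                       # False: tampering detected
```

The forensic value is exactly the property the prose describes: an investigator can trust the trail without first knowing how the attacker got in.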

Zero-Trust Access for Humans, Services, and AI Agents. Every request is authenticated, authorized, purpose-limited, time-bound, and logged — regardless of whether the caller is a human user, a service account, or an AI agent. A prompt-injected AI agent cannot exfiltrate data it was never authorized to see in the first place.

Hardened Architecture, Not Customer-Hardened Configuration. The platform itself is delivered as a hardened virtual appliance with embedded firewall, WAF, and intrusion detection. Single-tenant isolation means cross-tenant failure modes that devastate multi-tenant cloud services cannot occur.

The combination matters more than any single element. A breach of the application becomes a breach of the container, not the contents.

The Kiteworks Approach: Architecture Over Aspiration

Kiteworks was built for exactly this moment — though the moment has only just arrived.

The Kiteworks platform implements zero-trust principles at the data layer, not the network or application layer. It consolidates sensitive content channels — secure email, file sharing, managed file transfer, SFTP, data forms, virtual data rooms, APIs, and next-generation DRM — under a single governance framework with unified policy enforcement. Every operation is evaluated by a Data Policy Engine that triangulates user identity, data sensitivity, and intended action before permitting access. Every interaction is logged in a consolidated audit trail that delivers to SIEM in real time with zero throttling.

The Log4Shell proof point is the concrete evidence that this architecture works under maximum stress. When the industry confronted a CVSS 10 vulnerability in December 2021, Kiteworks customers experienced it as something closer to a CVSS 4. The hardened virtual appliance architecture, single-tenant isolation, double encryption at rest, and assume-breach internal design meant that the worst library vulnerability of that decade could not reach the data it was supposed to expose.

For the AI era, Kiteworks extends the same governance pattern to AI agent interactions. Kiteworks Compliant AI enforces the same ABAC policies, FIPS 140-3 encryption, and tamper-evident audit logging for every AI agent interaction with regulated data. The Kiteworks Secure MCP Server enables AI clients to connect through OAuth 2.0 authentication, ensuring credentials never reach language models. The Kiteworks Data Security and Compliance Risk: 2026 Forecast Report details how regulated-industry AI use cases are driving demand for this architectural pattern.

Data-layer governance is not a feature set. It is a design posture — one that remains effective precisely because it does not depend on perfect visibility into the threat landscape above it.

What Organizations Should Do Now

First, accept that application-layer security, while still necessary, is no longer sufficient as a primary defense strategy. The volume of discoverable vulnerabilities is about to accelerate beyond any organization’s capacity to respond, and NVD’s narrowing coverage means fewer of those vulnerabilities will come with a severity score. Architectures must shift toward protecting data independently of the applications that process it. The Kiteworks Data Security and Compliance Risk: 2026 Forecast Report found that only 17% of organizations report fully implemented AI governance frameworks; the control gap is architectural as well as procedural.

Second, conduct a comprehensive data discovery and classification exercise. The 2026 Thales Data Threat Report found that only 33% of organizations have complete knowledge of where their sensitive data resides. You cannot protect what you cannot find.

Third, implement data-layer encryption with customer-managed keys. Data must be encrypted at rest and in transit with keys the organization controls — not the cloud provider, not the SaaS vendor, not the AI model. FIPS 140-3 validated cryptographic modules and hardware security modules should be the default.

Fourth, deploy attribute-based access controls that travel with the data. Static role-based access fails when data moves across organizational boundaries, into AI workflows, or through third-party ecosystems. Access policies must be embedded within the data itself so they are enforced regardless of where the file is opened.

Fifth, govern AI data access with the same rigor applied to human access. The CrowdStrike 2026 Global Threat Report documented an 89% increase in AI-enabled adversary attacks. Every AI interaction with sensitive data must be authenticated, authorized, logged, and auditable — at the data layer, not the model layer.

Sixth, stop designing around CVE enrichment as if it is still a reliable signal. Layer threat-informed prioritization — CISA KEV, exploit prediction scoring, vendor advisories, direct threat intelligence — on top of data-layer controls that reduce the blast radius when prioritization misses something.
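The layering described above can be expressed as a simple decision rule: confirmed exploitation (CISA KEV) outranks predicted likelihood (EPSS), which outranks vendor-asserted severity, and anything below those thresholds falls back to data-layer containment. The sketch below is a hypothetical illustration — the KEV set, EPSS probabilities, and bucket names are invented inputs, not live feed data — of how a program might rank a CVE that never receives an NVD score.

```python
# Hypothetical triage sketch that layers threat signals instead of relying
# on an NVD-assigned CVSS score alone. The KEV membership set and the EPSS
# probabilities used here are illustrative, not real feed data.

KEV = {"CVE-2026-0001"}  # CVEs CISA has confirmed exploited in the wild

def priority(cve_id, epss_probability, vendor_severity):
    """Rank a CVE even when no NVD-assigned CVSS score exists."""
    if cve_id in KEV:
        return "patch-now"           # confirmed exploitation trumps everything
    if epss_probability >= 0.5:
        return "patch-this-week"     # high predicted exploit likelihood
    if vendor_severity in {"critical", "high"}:
        return "next-patch-window"   # vendor advisory as fallback signal
    return "monitor"                 # data-layer controls bound the impact

print(priority("CVE-2026-0001", 0.02, "medium"))   # patch-now
print(priority("CVE-2026-0002", 0.71, "low"))      # patch-this-week
print(priority("CVE-2026-0003", 0.01, "medium"))   # monitor
```

The thresholds here are arbitrary placeholders; the design point is the ordering of signals, with data-layer controls absorbing whatever the “monitor” bucket misses.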

The organizations that move now will be the ones still defensible in 2027. The window to rebuild the operating model is narrow, and regulatory pressure will continue rising as the disclosure wave from Mythos-class tools begins to crest.

Frequently Asked Questions

What is the difference between data-layer security and application-layer security?

Data-layer security protects the data itself through encryption, attribute-based access controls, persistent policies, and customer-managed keys that travel with the file regardless of which application processes it. Application-layer security protects the software that handles the data. The difference matters because application-layer controls fail when a new vulnerability is discovered. Data-layer controls remain effective regardless of which CVE an attacker exploits. NIST SP 800-207 frames zero trust as “primarily focused on data and service protection,” and the CISA Zero Trust Maturity Model makes Data one of its five pillars.

Why is the NVD no longer scoring most new CVEs, and what should organizations do about it?

CVE submissions grew 263% between 2020 and 2025, reaching roughly 48,000 in 2025, with FIRST forecasting 59,427 in 2026 and modeled scenarios exceeding 100,000. The NIST enrichment process could not scale to match. The backlog exceeds 30,000 entries. Starting in April 2026, NIST prioritizes three narrow categories and flags everything else as “not scheduled.” For vulnerability management programs that depend on NVD-assigned CVSS scores for prioritization, this means a widening blind spot. The practical response is to supplement NVD with CISA KEV data, vendor advisories, exploit prediction scoring, and threat intelligence — and to reduce reliance on application-layer patching as the primary defense.

Is Claude Mythos a real threat or hype?

The CSA briefing was signed by Jen Easterly, Bruce Schneier, Chris Inglis, and Phil Venables — former CISA and government cyber leaders with no commercial incentive to overstate. The U.K.’s AI Security Institute independently verified Mythos Preview’s capabilities, including completion of a 32-step corporate network attack simulation that previously required 20 hours of skilled human work. The question is not whether Mythos itself has produced thousands of confirmed CVEs in the wild. It is whether this class of capability exists. The independent evaluations say yes. The appropriate response is to assume Mythos-class tools will proliferate rapidly and design security accordingly.

Does data-layer security replace patching?

No. Patching remains necessary. The point is that patching is no longer sufficient as a primary defense. The CrowdStrike 2026 Global Threat Report measured 29-minute average breakout times and a 42% increase in zero-day exploits, while the average time to remediate critical vulnerabilities sits at 74 days. Data-layer security operates underneath patching, protecting the asset when patches arrive late or never. It is a complementary investment, not a substitute.

How should security leaders present this shift to the board?

Frame it as risk reduction ROI. The IBM 2025 Cost of a Data Breach Report recorded a U.S. average breach cost of $10.22 million. Gartner projects that 75% of organizations running GenAI initiatives will reprioritize toward unstructured data security by 2026. The strongest proof point is Log4Shell: Organizations with hardened data-layer architectures experienced the industry-wide CVSS 10 as something closer to a CVSS 4, because the exposure could not reach the data. The board conversation is not “buy something new.” It is “our current model assumes we can patch faster than AI can find zero-days and faster than NIST can score them. Neither assumption is true anymore.”

Get started.

It’s easy to start ensuring regulatory compliance and effectively managing risk with Kiteworks. Join the thousands of organizations who are confident in how they exchange private data between people, machines, and systems. Get started today.
