
PromptLock: AI Ransomware That Writes Its Own Attack Code in Real Time
The First Documented AI-Powered Ransomware
On August 27, 2025, ESET malware researchers Anton Cherepanov and Peter Strýček identified a significant development in ransomware technology: PromptLock, the world’s first documented AI-powered ransomware that generates its own attack code. Approximately 18 hours after samples appeared on VirusTotal, the researchers announced their findings through multiple channels, including LinkedIn.
This wasn’t just another ransomware variant with better encryption or cleverer distribution methods. PromptLock represents an important evolution in ransomware technology. While traditional ransomware follows pre-programmed patterns that security teams can learn to detect and block, PromptLock writes its own attack code in real-time, creating unique variations for every victim.
The stakes are significant. Current ransomware statistics show devastating impacts on organizations worldwide, with recovery times stretching to weeks and costs mounting into the millions. PromptLock operates at machine speed, potentially encrypting entire networks in minutes rather than hours.
In this comprehensive analysis, we’ll examine PromptLock’s technical architecture, explain why traditional defenses face new challenges, and explore how organizations can protect themselves against this emerging threat.
PromptLock Decoded: Understanding the First Documented AI-Powered Ransomware
What is PromptLock ransomware?
PromptLock is the first documented ransomware strain that uses artificial intelligence to generate malicious code in real-time, discovered by ESET researchers in August 2025. Unlike traditional ransomware that follows predetermined attack patterns, PromptLock leverages OpenAI’s gpt-oss:20b model to create unique attack strategies for each target, making it difficult to detect using conventional signature-based security tools.
Key Takeaways
- AI Ransomware Generates Unique Attack Code for Each Victim
PromptLock uses OpenAI’s gpt-oss:20b model to generate unique Lua scripts in real-time for each target, making traditional signature-based detection ineffective. Unlike traditional ransomware with static code, PromptLock creates entirely new attack patterns for every victim through its integration with the Ollama API for local execution. While this represents a significant evolution in ransomware technology, it’s important to note that PromptLock is currently a proof-of-concept rather than an active threat in the wild.
- Machine Speed Attacks Create Response Time Challenges
PromptLock can potentially encrypt networks in minutes rather than hours, operating faster than typical human response times of 15-30 minutes. This speed differential creates challenges for Security Operations Centers that rely on human analysts to investigate and respond to threats. However, the exact execution time would depend on various factors including network size, security controls in place, and system configurations.
- Security Control Gaps Increase Vulnerability
Research from Kiteworks reveals that only 17% of organizations have implemented technical controls to prevent employees from uploading sensitive data to public AI tools, with the remaining 83% relying on training, warnings, guidelines, or having no policies at all. Additionally, 27% of organizations report that more than 30% of information sent to AI tools contains private data. These gaps in technical controls create vulnerabilities that sophisticated threats could potentially exploit.
- Research Shows AI’s Attack Potential
Carnegie Mellon University research, conducted with Anthropic, demonstrated that Large Language Models equipped with appropriate tools achieved attack success rates between 48% and 100% in controlled experimental environments against enterprise networks. The research used an abstraction layer called Incalmo to translate AI intentions into technical commands, which appears similar to PromptLock’s approach of using AI to generate executable scripts.
- Current Status Provides Window for Preparation
PromptLock remains a proof-of-concept with incomplete features, debugging code, and unimplemented destruction functionality, with no evidence of deployment in actual attacks. This provides organizations with a unique opportunity to prepare defenses before such threats become operational. The rapid disclosure by ESET researchers (within 18 hours of discovery) suggests the potential for this type of threat to evolve, making proactive security measures important even though the immediate risk remains theoretical.
Technical Architecture
At its core, PromptLock represents a convergence of AI capabilities with malicious intent. The ransomware is written in Go, a programming language chosen for its cross-platform compatibility and efficient performance. This allows PromptLock to target Windows, Linux, and macOS systems — a detail that should concern any organization running mixed environments.
The ransomware operates through integration with the Ollama API, which enables local execution of the AI model. This is crucial for several reasons. First, it means PromptLock doesn’t need internet connectivity to function once it infiltrates a network. The AI runs entirely on the victim’s infrastructure, making it harder to detect through network monitoring. Second, local execution keeps the attack self-contained and fast: the AI can analyze the environment and adapt its attack strategy on the spot rather than waiting on cloud-based processing.
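Because the model runs locally, defenders gain one useful observable: the presence of an unexpected LLM runtime on a host. The snippet below is a minimal sketch of how a defender might inventory machines for a listening Ollama endpoint, assuming Ollama’s default port (11434) and its /api/tags model-listing route; the host list is a hypothetical placeholder.

```python
# Minimal sketch: inventory hosts for an unexpected local Ollama endpoint.
# Assumes Ollama's default port (11434) and its /api/tags model-listing route;
# the host list below is a hypothetical placeholder.
import json
import urllib.request

HOSTS = ["127.0.0.1", "10.0.0.15"]  # hypothetical hosts to check
OLLAMA_PORT = 11434                 # Ollama's default listening port

def list_local_models(host: str, timeout: float = 2.0) -> list[str]:
    """Return model names served by an Ollama endpoint on `host`, or [] if unreachable."""
    url = f"http://{host}:{OLLAMA_PORT}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)
        return [m.get("name", "") for m in payload.get("models", [])]
    except (OSError, ValueError):
        return []  # port closed, timed out, or not an Ollama endpoint

if __name__ == "__main__":
    for host in HOSTS:
        models = list_local_models(host)
        if models:
            print(f"[!] Local LLM runtime found on {host}: {models}")
```

Finding a model runtime is not malicious in itself; the value is knowing where local inference is possible so that unexpected instances can be investigated.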
The ransomware generates Lua scripts dynamically using hard-coded prompts, creating attack code that’s unique to each environment it encounters. For file encryption, it uses the SPECK 128-bit algorithm, a lightweight block cipher published by the NSA.
Attack Process
Dynamic Code Generation forms the foundation of PromptLock’s approach. When the ransomware first executes, it doesn’t immediately start encrypting files. Instead, it performs reconnaissance on the system, identifying its configuration, capabilities, and potential vulnerabilities. Based on this reconnaissance, the AI generates custom Lua scripts designed specifically for the target environment. No two attacks look the same at the code level, even if the end result — encrypted files and ransom demands — remains consistent.
Script Generation and Execution showcases PromptLock’s capabilities. PromptLock prompts an AI language model to generate malicious Lua scripts at the time of infection, allowing each attack instance to create custom logic suited to the specific compromised device’s operating system and file structure. This method enables the malware to flexibly select files for exfiltration or encryption based on what is discovered during real-time reconnaissance of the local system.
Dynamic Evasion represents PromptLock’s approach to avoiding detection. The AI-generated scripts offer dynamic evasion from static analysis and signature-based detection, since each infection’s payload is newly generated and not pre-packaged. However, once deployed, the scripts execute their predetermined logic without further modification or learning capabilities.
Current Status: Proof-of-Concept
Current evidence indicates PromptLock is a proof-of-concept rather than active malware. The samples uploaded to VirusTotal from the United States show characteristics of a work in progress: incomplete features, debugging code, and experimental functions. ESET’s analysis indicates that the core functionality, file exfiltration and encryption, works, but the destruction functionality does not appear to have been implemented yet. No evidence exists of PromptLock being deployed in actual attacks.
Traditional Security Faces New Challenges
Signature-Based Security Challenge
For decades, antivirus software has protected organizations by recognizing known threats. This approach works like a most-wanted poster system — security tools maintain databases of malware signatures and block anything matching these patterns. PromptLock presents challenges to this model.
Traditional ransomware operates with consistent patterns. When security companies discover a new variant, they analyze its code, extract unique identifiers, and distribute these signatures to their customers’ security tools. From that point forward, any system with updated signatures can detect and block that specific ransomware.
PromptLock complicates this model significantly. Because it generates unique code for each victim, there’s no consistent signature to detect. Security tools can’t pattern-match against something that has no pattern. Even if defenders capture and analyze one instance of PromptLock, that knowledge provides limited protection against the next attack.
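With no stable signature to match, detection has to key on behavior rather than code. The sketch below illustrates one common behavioral heuristic: flagging a burst of recently modified files whose contents look near-random, which is typical of bulk encryption. The directory, time window, and thresholds are illustrative placeholders, not tuned values.

```python
# Minimal sketch of a behavioral heuristic: flag a burst of recently modified
# files whose contents are near-random (high Shannon entropy), a pattern
# typical of bulk encryption. Directory, window, and thresholds are illustrative.
import math
import time
from pathlib import Path

WATCH_DIR = Path("/srv/shared")   # hypothetical monitored directory
WINDOW_SECONDS = 300              # consider files modified in the last 5 minutes
ENTROPY_THRESHOLD = 7.5           # bits per byte; most plaintext sits well below this
BURST_THRESHOLD = 50              # this many high-entropy writes triggers an alert

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 for empty input)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def count_recent_high_entropy_writes() -> int:
    cutoff = time.time() - WINDOW_SECONDS
    suspicious = 0
    for path in WATCH_DIR.rglob("*"):
        try:
            if not path.is_file() or path.stat().st_mtime < cutoff:
                continue
            with path.open("rb") as fh:
                sample = fh.read(4096)  # sample only the first 4 KB
        except OSError:
            continue  # unreadable or vanished file; skip it
        if shannon_entropy(sample) >= ENTROPY_THRESHOLD:
            suspicious += 1
    return suspicious

if __name__ == "__main__":
    hits = count_recent_high_entropy_writes()
    if hits >= BURST_THRESHOLD:
        print(f"[ALERT] {hits} high-entropy file writes in the last {WINDOW_SECONDS}s")
```

Heuristics like this generate false positives on compressed or already-encrypted formats, so in practice they feed a scoring or correlation layer rather than blocking outright.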
Human Speed Problem
Security Operations Centers (SOCs) measure their effectiveness by incident response times, with well-trained teams responding to threats within minutes. That might sound fast, but it may no longer be fast enough in the age of AI-powered attacks.
PromptLock operates at machine speed, potentially completing its entire attack chain in minutes. By the time a human analyst reviews the first alert, PromptLock may have already completed reconnaissance and begun encryption. Human-speed defense struggles to counter machine-speed offense.
This speed differential creates cascade effects throughout security operations. Traditional incident response playbooks assume defenders have time to observe, orient, decide, and act. PromptLock compresses these phases into seconds, requiring automated responses that can match its pace.
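Closing that gap means wiring high-confidence detections directly to containment actions rather than waiting on a human decision for every alert. The sketch below shows the pattern in miniature; isolate_host and notify_analysts are hypothetical stand-ins for whatever isolation and ticketing actions your own EDR or SOAR tooling actually exposes.

```python
# Minimal sketch of machine-speed containment: a high-confidence alert triggers
# immediate isolation, with analysts notified afterwards. `isolate_host` and
# `notify_analysts` are hypothetical stand-ins for your own EDR / SOAR actions.
from dataclasses import dataclass

AUTO_CONTAIN_THRESHOLD = 0.9  # only fully automate very-high-confidence detections

@dataclass
class Alert:
    host: str
    rule: str
    confidence: float  # 0.0 - 1.0

def isolate_host(host: str) -> None:
    print(f"[containment] isolating {host} from the network")  # placeholder action

def notify_analysts(alert: Alert, action: str) -> None:
    print(f"[notify] {alert.rule} on {alert.host}: {action}")  # placeholder action

def handle_alert(alert: Alert) -> None:
    """Contain first and investigate second when confidence is high enough."""
    if alert.confidence >= AUTO_CONTAIN_THRESHOLD:
        isolate_host(alert.host)
        notify_analysts(alert, "auto-contained, pending analyst review")
    else:
        notify_analysts(alert, "queued for analyst triage")

if __name__ == "__main__":
    handle_alert(Alert(host="fileserver-02", rule="mass-encryption-burst", confidence=0.95))
```

The design choice here is to reserve full automation for high-confidence rules while keeping humans in the loop for everything else.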
The Tool Fragmentation Crisis
According to the Carnegie Mellon research, modern enterprises struggle with fragmented security infrastructures that create visibility gaps. These gaps become potential exploitation opportunities for AI-powered threats.
Kiteworks’ research reinforces this vulnerability, specifically around how organizations prevent employees from uploading sensitive data to public AI tools. The breakdown shows:
- 40% of organizations use training and audits
- 20% use warnings only
- 10% have guidelines
- 13% have no policies
- Only 17% have implemented technical controls
This fragmented approach to security creates conditions where sophisticated threats like PromptLock could potentially thrive.
Carnegie Mellon Research Connection
The Carnegie Mellon study, conducted in concert with Anthropic, demonstrates that Large Language Models (LLMs) equipped with appropriate tools achieved attack success rates between 48% and 100% against enterprise networks. The research successfully replicated scenarios including the 2017 Equifax breach.
The key innovation that enabled these high success rates was an abstraction layer called Incalmo, which translated high-level AI intentions into technical commands. PromptLock appears to implement a similar approach, using its AI model to generate Lua scripts that execute specific attack actions.
Evolving Threat Landscape
Evolution Timeline
The path to PromptLock followed a gradual evolution in AI-powered cyber threats. According to KPMG’s Q1 2025 AI Pulse Survey, organizations are rapidly accelerating from experimentation to piloting AI agents, with piloting up from 37% to 65%. However, full deployment remains at 11%.
In 2023, attackers began using AI as a productivity tool, leveraging ChatGPT and similar platforms to write more convincing phishing emails. By 2024, the integration deepened. Attackers started using AI to analyze stolen data, identify valuable targets, and optimize their campaigns. AI has already been used extensively for social engineering and phishing, with several threat groups using AI for reconnaissance and attack planning.
August 2025 marks an important milestone with PromptLock representing malware that can generate its own attack code. While significant, this represents evolution rather than revolution in the threat landscape.
Industry-Specific Risks
Healthcare organizations face significant risks from PromptLock. The ransomware’s AI-generated scripts could be customized to target healthcare-specific file types and systems. The HIPAA implications multiply these concerns. PromptLock’s ability to intelligently select files for exfiltration based on system reconnaissance could result in targeted theft of sensitive patient records.
Kiteworks found that 27% of organizations report that more than 30% of the information sent to AI tools contains private data, which in healthcare can include medical records. This existing exposure creates entry points for threats to exploit.
Financial Services confront different but equally severe threats. PromptLock’s custom script generation could be tailored to identify and target financial data repositories. The real-time nature of financial operations makes them particularly vulnerable to rapid ransomware attacks.
The KPMG data shows that 74% of leaders prioritize data privacy and security when choosing AI providers, reflecting deep awareness of these risks in the financial sector.
Manufacturing faces threats to both IT and operational technology (OT). PromptLock’s adaptable script generation could potentially identify connections between IT and OT systems. Its design also includes exfiltration capabilities before encryption, potentially enabling theft of sensitive research and other intellectual property, eroding competitive advantage.
Accessibility Challenge
Perhaps the most troubling aspect of PromptLock is how it democratizes advanced persistent threat (APT) capabilities. Previously, executing sophisticated multi-stage attacks required years of experience and deep technical knowledge. PromptLock changes this equation. Once operational, it could allow relatively unskilled attackers to launch more sophisticated attacks.
AI Security Challenge
The current state of AI security provides context for why PromptLock represents a significant threat. As noted above, Kiteworks’ research found that only 17% of organizations have implemented technical controls to prevent employees from uploading sensitive data to public AI tools; the remainder rely on training and audits (40%), warnings alone (20%), guidelines (10%), or have no policy at all (13%).
Building Defense Against AI-Powered Threats
Security Challenge
PromptLock’s AI-driven nature represents an advanced threat that challenges traditional security systems. Its ability to generate unique code for each victim, combined with local AI execution, requires new defensive approaches.
Multi-Layered Security Architecture
Modern security platforms must provide comprehensive capabilities including:
AI-Powered Anomaly Detection for Data Protection:
- Advanced AI algorithms that learn normal data access patterns
- Behavioral analytics to identify unusual data movements
- Machine learning models that detect anomalies in content usage
- Real-time alerts when usage deviates from baselines
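To make the baseline-and-deviation idea concrete, here is a minimal sketch of one way such a check could work, comparing each user’s daily data-access volume against their own recent history with a simple z-score. The sample data and threshold are hypothetical; production systems would use far richer features and models.

```python
# Minimal sketch of baseline-and-deviation detection: compare today's data-access
# volume for each user against their own recent history using a z-score.
# The sample data and threshold are hypothetical.
from statistics import mean, stdev

Z_THRESHOLD = 3.0  # flag activity more than three standard deviations above baseline

# Hypothetical history: files accessed per day over the past two weeks.
history = {
    "alice": [12, 15, 9, 14, 11, 13, 10, 12, 16, 11, 14, 13, 12, 15],
    "bob":   [40, 38, 45, 42, 39, 41, 44, 40, 43, 39, 42, 41, 40, 44],
}
today = {"alice": 14, "bob": 620}  # bob's volume is far outside his baseline

def flag_anomalies(history: dict[str, list[int]], today: dict[str, int]) -> list[str]:
    flagged = []
    for user, counts in history.items():
        baseline, spread = mean(counts), stdev(counts)
        if spread == 0:
            continue  # no historical variation; avoid dividing by zero
        if (today.get(user, 0) - baseline) / spread > Z_THRESHOLD:
            flagged.append(user)
    return flagged

if __name__ == "__main__":
    for user in flag_anomalies(history, today):
        print(f"[ALERT] unusual data-access volume for {user}")
```

In practice this logic would run inside an alerting pipeline with richer signals, but the baseline-and-deviation principle is the same.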
Core Security Infrastructure:
- Hardened systems with embedded security controls
- Integration with Advanced Threat Protection (ATP) solutions
- Built-in antivirus and DLP capabilities
- Zero-trust architecture principles
Securing AI’s Access to Enterprise Data
As organizations increasingly rely on AI systems, securing how these systems access internal data becomes critical. Key capabilities should include the following (a brief sketch combining several of them appears after the list):
- Secure AI Data Access: Creating secure bridges between AI systems and enterprise repositories using zero-trust principles
- Data Governance & Compliance: Enforcing policies and ensuring compliance with GDPR, HIPAA, and other regulations
- End-to-End Encryption: Protecting data both at rest and in transit
- Comprehensive Audit Trails: Detailed logging of all AI data access patterns
- API Integration: Seamless integration with existing AI infrastructure
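As a minimal illustration, the sketch below combines the governance and audit ideas above into a small policy gate: every AI data-access request is checked against allow-rules and appended to an audit log before any data is returned. The agent names, classification labels, rules, and log path are hypothetical placeholders; a real deployment would integrate with existing IAM, DLP, and SIEM systems.

```python
# Minimal sketch of a policy gate for AI data access: every request is checked
# against allow-rules and appended to an audit log before any data is returned.
# Agent names, classification labels, rules, and the log path are hypothetical.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_access_audit.jsonl")  # append-only audit trail (JSON Lines)

# Hypothetical policy: which data classifications each AI agent may read.
POLICY = {
    "support-chatbot": {"public"},
    "analytics-agent": {"public", "internal"},
}

def audit(event: dict) -> None:
    """Append one audit record per access decision."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(event) + "\n")

def request_data(agent: str, resource: str, classification: str) -> bool:
    """Return True only if the agent is allowed to read data of this classification."""
    allowed = classification in POLICY.get(agent, set())
    audit({"agent": agent, "resource": resource,
           "classification": classification,
           "decision": "allow" if allowed else "deny"})
    return allowed

if __name__ == "__main__":
    print(request_data("support-chatbot", "kb/article-42.md", "public"))        # allowed
    print(request_data("support-chatbot", "hr/salaries.xlsx", "confidential"))  # denied
```

The same gate pattern scales up: the richer the policy and the audit record, the easier it is to reconstruct exactly what an AI system touched after an incident.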
Comprehensive Defense Strategies
Given PromptLock’s nature, defending against such threats requires leveraging multiple security layers:
- Detection: AI-anomaly detection identifies unusual patterns indicating reconnaissance or encryption activities
- Prevention: Zero-trust architecture and access controls limit attack vectors
- Containment: Network segmentation and automated responses contain threats
- Recovery: Secure backup and resilience strategies ensure continuity
Future-Proofing: Staying Ahead of AI Evolution
Emerging Threat Preparation
The rapid evolution from limited to widespread AI piloting shows how quickly the landscape changes. Organizations must prepare for:
- Increasingly sophisticated AI-powered attacks
- Coordinated threats using multiple AI agents
- Attacks targeting AI systems themselves
- Evolution of proof-of-concepts like PromptLock into active threats
Continuous Improvement Framework
With AI adoption accelerating and security concerns growing significantly, continuous improvement is essential:
- Regular security assessments focusing on AI risks
- Updates to threat detection models
- Ongoing training on evolving threats
- Regular testing of incident response procedures
Conclusion: The AI Security Evolution
PromptLock represents an important development in the ransomware landscape—a proof-of-concept that demonstrates how AI can be used to generate unique attack code. While not currently active in the wild, its emergence highlights the evolving nature of cyber threats.
The research from Carnegie Mellon University and Anthropic demonstrated that AI can achieve significant attack success rates against enterprise networks. PromptLock shows how these academic findings could be applied in practice.
The window for preparation remains open, as PromptLock has not been deployed in actual attacks. However, the trend toward AI-powered threats continues to accelerate.
The data from Kiteworks and KPMG paint a clear picture: many organizations face challenges in securing AI implementations. With most organizations relying on training rather than technical controls, and full deployment at just 11% despite rapid piloting growth, security practices have yet to catch up with adoption.
Moving forward requires adapting security approaches to address AI-powered threats. Organizations should consider shifting from reactive to proactive postures and from fragmented tools to unified architectures.
The question isn’t whether more AI-powered ransomware will emerge; the trend suggests it will. The question is whether organizations will be prepared when proof-of-concepts like PromptLock evolve into active threats.
Frequently Asked Questions
How is PromptLock different from traditional ransomware?
PromptLock uses artificial intelligence (OpenAI’s gpt-oss:20b model) to generate unique Lua scripts for each victim in real-time, making traditional signature-based detection ineffective. Unlike traditional ransomware that uses pre-written, static code, PromptLock creates custom attack code tailored to each specific environment it encounters. However, it’s important to note that PromptLock is currently a proof-of-concept discovered by ESET researchers, not an active threat in the wild.
How fast could a PromptLock attack unfold?
PromptLock could potentially operate at machine speed, completing attacks in minutes rather than the hours typical of human-operated ransomware. The AI generates custom scripts based on system reconnaissance and executes them without human delays. However, actual execution time would depend on factors like network size, security controls, and system configurations. Since PromptLock hasn’t been deployed in real attacks, these timeframes are theoretical based on its technical capabilities.
Can traditional antivirus tools detect PromptLock?
Traditional signature-based antivirus tools would face significant challenges detecting PromptLock because it generates unique code for each attack with no consistent pattern to match against. However, behavioral detection systems might identify suspicious activities like mass file encryption or unusual system access patterns. Since PromptLock is currently a proof-of-concept and not in active use, most organizations aren’t at immediate risk, but the threat highlights limitations in signature-based security approaches.
Which industries face the greatest risk from PromptLock?
While PromptLock could theoretically target any organization, healthcare, financial services, and manufacturing face particular risks due to their sensitive data and operational requirements. Healthcare organizations handle patient records subject to HIPAA regulations, financial firms manage real-time transactions, and manufacturers protect valuable intellectual property. However, since PromptLock remains a proof-of-concept, these risks are currently theoretical rather than immediate threats.
How can organizations protect themselves against AI-powered ransomware?
Protection requires multi-layered defenses including behavioral analytics that detect suspicious activities regardless of code patterns, network segmentation to limit potential damage, air-gapped backups that ransomware cannot reach, and technical controls for AI tool usage. Kiteworks research shows only 17% of organizations have implemented technical controls to prevent data exposure to AI tools, highlighting a critical gap. While PromptLock isn’t currently active, organizations should use this window to strengthen defenses against future AI-powered threats.
Additional Resources
- Blog Post: Zero Trust Architecture: Never Trust, Always Verify
- Video: How Kiteworks Helps Advance the NSA’s Zero Trust at the Data Layer Model
- Blog Post: What It Means to Extend Zero Trust to the Content Layer
- Blog Post: Building Trust in Generative AI with a Zero Trust Approach
- Video: Kiteworks + Forcepoint: Demonstrating Compliance and Zero Trust at the Content Layer