AI Security Threats Surge: Protecting Against Prompt Injections

AI Security Threats Surge: Protecting Against Prompt Injections

Organizations across the United States and Europe are confronting an alarming reality: artificial intelligence applications have become prime targets for cybercriminals, and most security teams lack the visibility needed to protect these systems effectively.

Key Takeaways

  1. Prompt Injection Attacks Dominate AI Threat Landscape. Seventy-six percent of organizations report prompt injection attacks as their primary AI security concern, surpassing vulnerable code and jailbreaking attempts. Security teams must develop specialized detection and prevention capabilities specifically designed to counter these manipulation techniques.
  2. Shadow AI Creates Critical Visibility Gaps. Sixty-three percent of security practitioners cannot identify where LLMs operate within their organizations, creating blind spots that prevent effective protection. Organizations must implement discovery processes and governance frameworks to track AI adoption across all departments and systems.
  3. Security Integration Occurs Too Late in Development. Only 43% of organizations build AI applications with security capabilities from the start, while 34% involve security teams before development begins. Earlier security involvement through shift-left practices can prevent vulnerabilities rather than requiring costly remediation after deployment.
  4. AI Adoption Outpaces Security Capabilities. Sixty-one percent of new enterprise applications now incorporate AI components, with 70% of AI-related APIs accessing sensitive data. The speed of AI adoption continues to exceed security teams' ability to implement adequate controls and monitoring.
  5. Cultural Barriers Impede Security Collaboration. Seventy-four percent of organizations report that developers view security as an obstacle to AI innovation rather than an enabler. Breaking down this cultural divide requires demonstrating how security practices support safe innovation and competitive advantage.

A comprehensive survey of 500 security practitioners and decision-makers reveals that cyberattacks targeting AI applications are increasing at a troubling pace. The research, conducted by Traceable by Harness, exposes significant gaps in how organizations build, deploy, and secure AI-powered systems.

Prompt Injection Attacks Lead Security Threats

The survey identifies three dominant attack vectors targeting AI systems. Prompt injections involving large language models top the list at 76%, representing the most prevalent threat organizations face today. These attacks manipulate LLM inputs to extract sensitive information or produce harmful outputs that bypass intended safety controls.

Vulnerable LLM code follows closely at 66%. Many language models generate code based on examples scraped from across the web, including flawed implementations that contain security weaknesses. When developers incorporate this AI-generated code without proper review, they inadvertently introduce vulnerabilities into production systems.

LLM jailbreaking rounds out the top three threats at 65%. These attacks attempt to circumvent the safety guardrails built into language models, potentially causing them to produce inappropriate, dangerous, or malicious content.

Shadow AI Creates Unprecedented Visibility Challenges

Perhaps the survey's most concerning finding centers on organizational awareness. A striking 63% of security practitioners admit they have no way to identify where LLMs operate within their organization. This blind spot creates substantial risk, as security teams cannot protect systems they cannot see.

Three-quarters of respondents expect shadow AI—the unauthorized deployment of AI tools without IT oversight—to eclipse the security problems previously caused by shadow IT adoption. This represents a significant escalation in supply chain risk management, as AI systems typically process sensitive data and make consequential decisions.

Nearly three-quarters of organizations (72%) acknowledge that shadow AI represents a gaping chasm in their security posture. The decentralized nature of AI adoption, combined with the ease of integrating AI capabilities through APIs, has created an environment where applications using LLMs proliferate faster than security teams can track them.

AI Applications Mark New Territory for Cybercriminals

The overwhelming majority of respondents (82%) recognize that AI applications constitute new frontier for cybercriminals. This acknowledgment reflects a sobering reality: traditional security approaches often prove inadequate against AI-specific threats.

Seventy-five percent of security professionals admit that AI risk and security threats represent challenges they have never encountered before. The unique characteristics of AI systems—their probabilistic outputs, complex training data dependencies, and novel attack surfaces—require security teams to develop new skills and strategies.

Adam Arellano, Field CTO for Traceable by Harness, points to a fundamental problem: many AI applications are not being built and deployed using established application security practices. This gap between AI innovation speed and security maturity creates opportunities for exploitation.

Developers Build AI Without Security Input

The survey reveals troubling patterns in how organizations integrate security into their AI development processes. Only 43% of respondents confirm that application developers consistently build AI applications with security capabilities from the start. This means more than half of AI projects lack security by design.

Communication between development and security teams appears even more problematic. Just over one-third (34%) of security teams learn about AI projects before developers start building applications. This late involvement limits security teams' ability to influence architecture decisions that could prevent vulnerabilities.

More than half (53%) of respondents report that security teams receive notification before applications deploy to production environments. However, 14% indicated their teams only learn about new AI applications after deployment or following a security incident—far too late to prevent potential breaches.

Critical Visibility Gaps Persist Across AI Components

The survey highlights two specific areas where organizations lack essential visibility. Most security teams (63%) cannot access real-time information about the software bill of materials for AI components, known as AI-BOMs. Without this inventory, teams cannot assess which AI components contain known vulnerabilities or require updates.

Similarly, 60% lack visibility into LLM model outputs. This blind spot prevents security teams from detecting when models produce problematic responses, whether through malicious prompts, model manipulation, or unintended behavior.

Cultural Friction Slows Security Integration

Nearly three-quarters (74%) of respondents report that application developers view security issues and concerns as obstacles to AI innovation. This perception creates friction that can discourage developers from engaging security teams early in the development process.

This cultural divide undermines security efforts. When developers see security as an impediment rather than an enabler, they may avoid involving security teams until problems emerge. Breaking down this barrier requires demonstrating how security practices can support innovation rather than hinder it.

AI Adoption Accelerates Faster Than Security Capabilities

The pace of AI adoption compounds these security challenges. Sixty-one percent of new enterprise applications now incorporate AI components from their initial design. This means organizations are building AI functionality into the majority of their new systems, dramatically expanding the attack surface.

Seventy percent of respondents note that APIs used to invoke LLMs access sensitive data. This finding underscores the stakes involved—AI security failures can expose confidential information, customer data, intellectual property, and other critical assets.

Arellano emphasizes that the question is no longer whether cybersecurity incidents involving AI applications will occur, but rather what level of severity they will reach. The rate at which organizations develop AI applications continues to outpace security teams' ability to secure them effectively.

AI-Generated Code Creates Vulnerability Cycles

Most current LLMs used for code generation were trained on examples pulled from across the web. This training data inevitably includes flawed implementations containing security vulnerabilities. When LLMs generate code based on these patterns, they can reproduce or create similar weaknesses.

Without established best practices for governing AI coding tools, increased code generation translates directly into more vulnerabilities. Organizations must find and remediate these weaknesses before attackers exploit them—a race that security teams are struggling to win.

The challenge intensifies because cybercriminals now use AI tools to discover vulnerabilities faster than ever before. Adversaries can leverage automation to identify and exploit weaknesses at machine speed, compressing the window security teams have to respond.

Moving Forward Despite Uncertainty

Organizations cannot simply abandon AI applications despite these security concerns. The technology offers too many benefits, and competitive pressures make adoption inevitable. As Arellano notes, the genie will not return to the bottle.

Security teams must instead focus on practical steps to improve their posture. Encouraging application developers to exercise greater diligence represents a starting point. Scanning AI-generated code for vulnerabilities before deploying applications to production environments can catch many issues before they create AI risk.

Organizations should establish clear processes for involving security teams early in AI projects. When security professionals participate in design discussions, they can guide architects toward secure implementations and identify potential issues before development begins.

Creating comprehensive inventories of AI components and LLM usage across the organization provides essential visibility. Security teams cannot protect systems they do not know exist. Regular audits and discovery processes help maintain awareness as new AI applications appear.

Developing specialized skills for AI security will prove critical. Security teams need to understand how prompt injection attacks work, how to secure APIs that invoke LLMs, and how to validate AI model outputs. Training programs and certifications focused on AI security can help build these capabilities.

Organizations should also foster better collaboration between development and security teams. Reframing security as an enabler of safe innovation rather than an obstacle can reduce friction and encourage developers to engage security professionals proactively.

Preparing for Escalating Threats

Cybersecurity teams should prepare for the worst while working to improve their defenses. The survey data suggests that AI-related security incidents will continue to increase in both frequency and severity. Organizations need incident response plans specifically designed to address AI application compromises.

These plans should account for the unique characteristics of AI incidents. For example, a successful prompt injection might not leave traditional forensic evidence, requiring security teams to analyze LLM logs and model behavior patterns to understand what occurred.

Organizations should also establish clear policies governing AI usage. These policies can help prevent shadow AI by creating approved pathways for teams to access AI capabilities while maintaining security oversight. When employees can easily access sanctioned AI tools, they have less incentive to adopt unauthorized solutions.

Regular security assessments of AI applications can help identify weaknesses before attackers discover them. These assessments should evaluate both the AI components themselves and the surrounding infrastructure that supports them, including APIs, data stores, and integration points.

The path forward requires balancing innovation with security. Organizations cannot allow security concerns to completely block AI adoption, as that would sacrifice competitive advantages and business value. However, they also cannot rush forward without considering the risks these systems introduce.

The survey results make clear that most organizations currently lack adequate security practices for AI applications. Closing this gap requires commitment from leadership, investment in tools and training, and cultural change that brings development and security teams into better alignment. Organizations that successfully navigate these challenges will position themselves to capture AI's benefits while managing its risks effectively.

Frequently Asked Questions

Prompt injections targeting large language models represent the most prevalent threat at 76%, followed by vulnerable LLM-generated code at 66% and LLM jailbreaking at 65%. These attacks manipulate AI inputs to extract sensitive information, exploit code weaknesses, or bypass safety controls. Organizations must prioritize defenses against these three attack vectors to protect their AI systems effectively.

Shadow AI refers to the unauthorized deployment of AI tools without IT oversight, similar to shadow IT but potentially more dangerous. Seventy-five percent of security professionals expect shadow AI to eclipse previous shadow IT security issues because 63% cannot identify where LLMs operate within their organizations. This lack of visibility prevents security teams from protecting systems they cannot see, creating substantial gaps in organizational security posture.

Only 34% of security teams learn about AI projects before developers begin building applications, while 53% receive notification before production deployment. Alarmingly, 14% only discover new AI applications after deployment or following a security incident. This late involvement severely limits security teams’ ability to influence architecture decisions and prevent vulnerabilities from the start.

Most LLMs used for code generation were trained on examples scraped from across the web, which inevitably includes flawed implementations containing security vulnerabilities. When LLMs generate code based on these patterns, they reproduce similar weaknesses. Without established best practices for governing AI coding tools, increased code generation directly translates into more vulnerabilities that security teams must identify and remediate.

Organizations should scan AI-generated code for vulnerabilities before production deployment, involve security teams early in AI project design, and create comprehensive inventories of AI components and LLM usage. Establishing clear policies governing AI usage helps prevent shadow AI by providing approved pathways with security oversight. Regular security assessments of AI applications and specialized training for security teams in AI-specific threats are also essential.

Get started.

It’s easy to start ensuring regulatory compliance and effectively managing risk with Kiteworks. Join the thousands of organizations who are confident in how they exchange private data between people, machines, and systems. Get started today.

Table of Content
Share
Tweet
Share
Explore Kiteworks