A new industry study by Kiteworks reveals a critical gap between perceived and actual AI security readiness across organizations. The research, which surveyed 461 cybersecurity and IT professionals, found that only 17% of companies have automated technical controls to prevent employees from uploading sensitive data to AI tools like ChatGPT. The remaining 83% rely on ineffective human-dependent measures such as training sessions, warning emails, or written guidelines—with 13% having no policies at all. This security deficit is compounded by a dangerous overconfidence gap, where 33% of executives claim comprehensive AI usage tracking while independent studies show only 9% have functioning governance systems.
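The "automated technical controls" the study contrasts with training sessions and written guidelines can be as simple as a pattern-based check that runs before any text is allowed to leave for an external AI endpoint. The sketch below is illustrative only, not Kiteworks' implementation: the function name and the choice of patterns (SSN format plus Luhn-validated card numbers) are assumptions for this example.

```python
import re

# U.S. Social Security number in the common NNN-NN-NNNN form.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
# Candidate payment-card numbers: 13-16 digits, optionally separated.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    if len(digits) < 13:
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_sensitive_data(text: str) -> bool:
    """Flag text that appears to contain an SSN or a valid card number."""
    if SSN_RE.search(text):
        return True
    return any(luhn_valid(m) for m in CARD_RE.findall(text))
```

A gateway or browser extension could call a check like this on every prompt and block or redact the upload on a match; the point of the study's distinction is that the decision is made by code, not by an employee remembering a policy.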

The scope of data exposure is alarming, with 27% of organizations admitting that over 30% of information sent to AI tools contains private data including Social Security numbers, medical records, credit card information, and trade secrets. Another 17% have no visibility into what employees share with AI platforms. The proliferation of “shadow AI”—unauthorized tools downloaded by employees—creates thousands of invisible data leakage points. With 86% of organizations blind to AI data flows and the average company having 1,200 unofficial applications, sensitive information routinely flows into AI systems where it becomes permanently embedded in training models, potentially accessible to competitors or malicious actors.
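Shadow AI discovery of the kind the report describes usually starts with egress visibility: comparing outbound destinations against a watchlist of known AI services. A minimal sketch, assuming a simple space-delimited proxy log (timestamp, user, domain, bytes) and a hypothetical domain list; a real deployment would use a maintained category feed from a secure web gateway.

```python
from collections import Counter

# Hypothetical watchlist of consumer AI endpoints for this example.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(proxy_log_lines, sanctioned=frozenset()):
    """Return {domain: hit_count} for AI destinations outside the sanctioned set."""
    hits = Counter()
    for line in proxy_log_lines:
        parts = line.split()  # assumed format: timestamp user domain bytes
        if len(parts) >= 3:
            domain = parts[2].lower()
            if domain in AI_DOMAINS and domain not in sanctioned:
                hits[domain] += 1
    return dict(hits)
```

Even this crude tally surfaces which unapproved AI tools employees actually reach, which is the visibility the 86% of blind organizations lack.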

Regulatory compliance presents another critical challenge as enforcement accelerates. U.S. agencies issued 59 AI regulations in 2024, more than double the previous year, yet only 12% of companies list compliance violations among their top AI concerns. Current practices violate specific provisions of GDPR, CCPA, HIPAA, and SOX daily. Without proper visibility into AI interactions, organizations cannot respond to data deletion requests, maintain required audit trails, or demonstrate compliance during regulatory reviews. The median remediation time for exposed credentials stretches to 94 days, giving attackers months to exploit leaked access.
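The audit-trail gap described above can be illustrated with a minimal record of each AI interaction: who sent what, where, and when. This is a sketch under the assumption that storing a SHA-256 digest of the prompt, rather than the prompt itself, is acceptable for the trail; the field names are invented for this example.

```python
import hashlib
import json
import time

def audit_record(user: str, destination: str, prompt: str) -> str:
    """Build one JSON audit line for an AI interaction.

    Stores a digest rather than the raw prompt, so the trail can show
    what was sent without re-exposing the sensitive content itself.
    """
    return json.dumps({
        "ts": time.time(),
        "user": user,
        "destination": destination,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    })
```

Appending records like this to tamper-evident storage is one way an organization could answer a regulator's question about which users sent data to which AI platforms, and when.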

The report identifies four urgent actions organizations must take: conduct honest audits of actual AI usage to close the 300% overconfidence gap; deploy automated technical controls, since human-dependent measures consistently fail; establish unified data governance command centers to track all AI-related data movements; and implement comprehensive risk management with real-time monitoring across all platforms. The convergence of explosive AI adoption, surging security incidents, and accelerating regulation creates a rapidly closing window for action. Organizations that fail to secure their AI usage now face significant regulatory penalties, reputational damage, and competitive disadvantage as sensitive data shared today becomes permanently embedded in AI systems.


