Intelligence Analysis
Growth of Shadow AI Underscores Need for AI Governance
12 AUG 2025 / 3 min read
Author: Cyber Intel Lead

As generative AI (GenAI) platforms gain widespread popularity, employees are increasingly leveraging them to enhance productivity in areas such as content creation, data analysis, and code generation and debugging. However, employees may use these tools outside of formal governance guardrails without understanding the risks they pose. This growing trend of using AI technologies without the approval or oversight of cybersecurity teams is called shadow AI.
Compliance Risks
Employees may enter confidential, proprietary, or client data into an AI platform. Such data may be retained or used to train external models, resulting in privacy breaches or non-compliance with regulatory and audit frameworks such as the European Union's General Data Protection Regulation (GDPR), the US Health Insurance Portability and Accountability Act (HIPAA), and Service Organization Control 2 (SOC 2).
Increased Vulnerability to Cyber Attacks
Many rapidly developed applications – particularly "wrapper" tools, which are third-party apps built on public AI platforms – bypass formal security assessments and lack essential protections such as encryption. These gaps significantly expand an organization's attack surface, creating vulnerabilities that traditional security tools are not designed to detect or defend against. Most existing monitoring tools cannot detect shadow AI activity, and usage that goes undetected cannot be controlled or remediated.
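As a minimal sketch of what shadow AI detection might look like in practice, the snippet below scans proxy-log lines for GenAI domains that fall outside an enterprise allowlist. The log format, domain lists, and function name are illustrative assumptions, not a reference to any specific monitoring product.

```python
# Hypothetical GenAI domains to flag; a real deployment would maintain this
# list from threat-intel feeds and the organization's approved-tool registry.
GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}
APPROVED_DOMAINS = {"api.openai.com"}  # assumed enterprise-sanctioned endpoint


def flag_shadow_ai(proxy_log_lines):
    """Return (user, domain) pairs for GenAI traffic outside the allowlist."""
    hits = []
    for line in proxy_log_lines:
        # Assumed log format: "<timestamp> <user> <domain> <bytes>"
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in GENAI_DOMAINS and domain not in APPROVED_DOMAINS:
            hits.append((user, domain))
    return hits


logs = [
    "2025-08-12T09:01Z alice chat.openai.com 5120",
    "2025-08-12T09:02Z bob api.openai.com 2048",
    "2025-08-12T09:03Z carol claude.ai 7300",
]
print(flag_shadow_ai(logs))  # [('alice', 'chat.openai.com'), ('carol', 'claude.ai')]
```

Even a simple allowlist check like this surfaces unsanctioned usage that would otherwise remain invisible, which is the precondition for controlling or remediating it.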
The Need for AI Governance – Not Bans
Issuing blanket bans on AI tools will not stop their use; employees will likely adopt a more covert approach, compounding shadow AI concerns. Instead, organizations need sustainable, risk-aware AI governance policies that integrate with existing governance and compliance architectures.
Once policies are in place, employees can be given access to enterprise-approved AI platforms with controls such as data redaction, prompt logging, and data loss prevention (DLP) to provide visibility into employee activity and protect sensitive data.
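A minimal sketch of two of these controls, data redaction and prompt logging, is shown below. The redaction patterns and function name are illustrative assumptions; production DLP tooling uses far richer detectors than a pair of regular expressions.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("prompt-audit")

# Hypothetical sensitive-data patterns for demonstration only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact_and_log(user, prompt):
    """Redact known sensitive patterns, record an audit entry, return the safe prompt."""
    redacted = prompt
    for label, pattern in PATTERNS.items():
        redacted = pattern.sub(f"[{label} REDACTED]", redacted)
    # Prompt logging: only the redacted text is written to the audit trail.
    log.info("user=%s prompt=%r", user, redacted)
    return redacted


safe = redact_and_log("alice", "Email john.doe@example.com the report, SSN 123-45-6789")
print(safe)  # Email [EMAIL REDACTED] the report, SSN [SSN REDACTED]
```

Placing a gateway like this between employees and the AI platform gives security teams the visibility the article describes while keeping sensitive values out of both the model and the logs.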
As with most areas of cybersecurity, the greatest risk is user error, and the most effective control remains employee education. Even the most advanced controls cannot compensate for a poorly informed workforce. Organizations should train employees on the risks of unapproved AI applications, which tools are approved for use, and what types of data may safely be entered into them. Training must be ongoing and evolve alongside AI platforms and the governance landscape.
Shadow AI is a rapidly growing governance challenge for security, legal, and compliance teams. Organizations must rely on internal governance to ensure AI serves as a competitive asset instead of an unmanaged liability. Businesses that develop structured AI governance regimes and invest in ongoing employee education and training are best placed to maximize the use of AI tools while keeping sensitive data secure.