Intelligence Analysis
Growth of Shadow AI Underscores Need for AI Governance
12 AUG 2025
Author
Cyber Intel Lead

As generative AI (GenAI) platforms gain widespread popularity, employees are increasingly leveraging them to enhance productivity in areas such as content creation, data analysis, and code generation and debugging. However, employees may use these tools outside of formal governance guardrails without understanding the risks these tools pose. This growing trend of using AI technologies without the approval or oversight of cybersecurity teams is called Shadow AI.
Compliance Risks
Employees may submit confidential, proprietary, or client data into an AI platform. Such data may be retained or used in external model training, resulting in privacy breaches or non-compliance with regulatory and audit frameworks, such as the European Union's General Data Protection Regulation (GDPR), the US Health Insurance Portability and Accountability Act (HIPAA), and System and Organization Controls 2 (SOC 2).
Increased Vulnerability to Cyber Attacks
Many rapidly developed applications – specifically "wrapper" tools, which are third-party apps built on public AI platforms – often bypass formal security assessments and lack essential protections like encryption. These gaps significantly expand an organization's attack surface, creating vulnerabilities that traditional security and monitoring tools are not designed to detect or defend against. If Shadow AI usage goes undetected, it cannot be controlled or remediated.
The Need for AI Governance – Not Bans
Issuing blanket bans on AI tools will not stop their use; employees will likely adopt a more covert approach, thereby compounding concerns around Shadow AI. Instead, organizations need to adopt sustainable, risk-aware AI governance policies that integrate with existing governance and compliance architectures.
Once the policies are in place, employees can be provided access to enterprise-approved AI platforms with controls, such as data redaction, prompt logging, and data loss prevention, to ensure visibility into employee actions and protect sensitive data.
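To make the controls above concrete, the following is a minimal, illustrative sketch of how a redaction-and-logging step might sit in front of an enterprise AI gateway. The PII patterns, function names, and audit-record fields are hypothetical examples, not a description of any specific product; a production deployment would use a vetted DLP engine and ship audit records to a SIEM rather than returning them in-process.

```python
import re

# Hypothetical PII patterns an enterprise AI gateway might screen for
# before a prompt leaves the corporate boundary (illustrative only).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive substrings with placeholders and report what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt, findings

def build_audit_record(user: str, redacted: str, findings: list[str]) -> dict:
    """Assemble a prompt-logging entry; a real gateway would forward this to a SIEM."""
    return {"user": user, "prompt": redacted, "redactions": findings}

clean, found = redact_prompt(
    "Summarize the contract for jane.doe@example.com, SSN 123-45-6789."
)
record = build_audit_record("jdoe", clean, found)
```

The design point is that redaction happens before the prompt reaches any external model, so even an approved platform never receives the raw sensitive data, while the audit record gives security teams the visibility into employee actions described above.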
As with most areas of cybersecurity, the greatest risks lie with user error, and the most effective control remains employee education. Even the most advanced controls cannot compensate for a poorly informed workforce. Organizations should train employees on the risks of using unapproved AI applications, which tools are approved for use, and what types of data may be safely entered into these applications. Training must be ongoing and evolve along with AI platforms and the governance landscape.
Shadow AI is a rapidly growing governance challenge for security, legal, and compliance teams. Organizations must rely on internal governance to ensure AI serves as a competitive asset instead of an unmanaged liability. Businesses that develop structured AI governance regimes and invest in ongoing employee education and training are best placed to maximize the use of AI tools while keeping sensitive data secure.