

Intelligence Analysis

Growth of Shadow AI Underscores Need for AI Governance

12 AUG 2025 / 3 min read



As generative AI (GenAI) platforms gain widespread popularity, employees are increasingly leveraging them to enhance productivity in areas such as content creation, data analysis, and code generation and debugging. However, employees may use these tools outside of formal governance guardrails without understanding the risks they pose. This growing trend of using AI technologies without the approval or oversight of cybersecurity teams is called Shadow AI.

Compliance Risks

Employees may submit confidential, proprietary, or client data to an AI platform. Such data may be retained or used to train external models, resulting in privacy breaches or non-compliance with regulatory and compliance frameworks such as the European Union's General Data Protection Regulation (GDPR), the US Health Insurance Portability and Accountability Act (HIPAA), and System and Organization Controls 2 (SOC 2).
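
To make the exposure concrete, the following minimal Python sketch shows how a pre-submission check might flag prompts containing patterns that resemble regulated identifiers. The patterns and the `flag_sensitive` helper are illustrative assumptions, not a real data loss prevention engine; production classifiers use far richer detection than simple regexes.

```python
import re

# Illustrative patterns only (assumed for this sketch); real DLP
# classifiers go well beyond simple regex matching.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this record: SSN 123-45-6789, contact jane@example.com"
    hits = flag_sensitive(prompt)
    if hits:
        print(f"Blocked: prompt contains {', '.join(hits)}")
```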

Increased Vulnerability to Cyber Attacks

Many rapidly developed applications, particularly "wrapper" tools (third-party apps built on public AI platforms), often bypass formal security assessments and lack essential protections such as encryption. These gaps significantly expand an organization's attack surface, creating vulnerabilities that traditional security tools are not designed to detect or defend against. Because most existing monitoring tools are not equipped to detect Shadow AI activity, such usage often goes unnoticed, and usage that goes undetected cannot be controlled or remediated.
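
One pragmatic first step toward visibility is mining logs the organization already collects, such as web proxy logs, for traffic to known GenAI endpoints. The sketch below is a hedged illustration: it assumes a simplified log format of `<user> <destination_host>` and a small, hypothetical domain watchlist; a real deployment would parse the proxy's actual log schema and consume a maintained feed of AI service domains.

```python
from collections import Counter

# Hypothetical watchlist (assumed); in practice this would be a
# maintained feed of GenAI endpoints, not a hard-coded set.
GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(log_lines: list[str]) -> Counter:
    """Count requests to known GenAI domains, assuming simplified
    proxy log lines of the form '<user> <destination_host>'."""
    hits: Counter = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        user, host = parts
        if host in GENAI_DOMAINS:
            hits[(user, host)] += 1
    return hits

if __name__ == "__main__":
    sample = ["alice chat.openai.com", "bob intranet.corp.local", "alice claude.ai"]
    for (user, host), count in find_shadow_ai(sample).items():
        print(f"{user} -> {host}: {count} request(s)")
```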

The Need for AI Governance – Not Bans

Issuing blanket bans on AI tools will not stop their use; employees will likely adopt a more covert approach, compounding the Shadow AI problem. Instead, organizations need sustainable, risk-aware AI governance policies that integrate with existing governance and compliance architectures.

Once these policies are in place, employees can be given access to enterprise-approved AI platforms with controls such as data redaction, prompt logging, and data loss prevention to provide visibility into employee activity and protect sensitive data.
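
As a minimal sketch of how such a gateway might fit together, the Python below redacts email addresses and logs each prompt before forwarding it to the sanctioned platform. Everything here is assumed for illustration: `call_model` is a stand-in for whatever API the approved platform actually exposes, and a real gateway would redact far more than email addresses.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(prompt: str) -> str:
    """Replace email addresses with a placeholder before the prompt
    leaves the corporate boundary (data redaction control)."""
    return EMAIL.sub("[REDACTED-EMAIL]", prompt)

def call_model(prompt: str) -> str:
    # Stand-in for the enterprise-approved AI platform's API (assumed).
    return f"(model response to: {prompt})"

def send_to_approved_model(user: str, prompt: str) -> str:
    """Redact, log for audit (prompt logging control), then forward."""
    safe_prompt = redact(prompt)
    log.info("user=%s prompt=%r", user, safe_prompt)
    return call_model(safe_prompt)

if __name__ == "__main__":
    print(send_to_approved_model("alice", "Draft a reply to jane@example.com about Q3"))
```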

As with most areas of cybersecurity, the greatest risk lies with user error, and the most effective control remains employee education. Even the most advanced controls cannot compensate for a poorly informed workforce. Organizations should train employees on the risks of unapproved AI applications, which tools are approved for use, and what types of data may safely be entered into them. Training must be ongoing and evolve along with AI platforms and the governance landscape.

Shadow AI is a rapidly growing governance challenge for security, legal, and compliance teams. Organizations must rely on internal governance to ensure AI serves as a competitive asset instead of an unmanaged liability. Businesses that develop structured AI governance regimes and invest in ongoing employee education and training are best placed to maximize the use of AI tools while keeping sensitive data secure. 


