Shadow AI Threatens Corporate Security: 20% of Firms Face Costly Breaches, Urgent Governance Needed
August 24, 2025
Shadow AI creates blind spots for security teams: enterprises use an average of 66 different GenAI apps, often without IT's knowledge or control.
Implementing a zero-trust approach, in which only approved tools and devices are trusted, can significantly reduce security exposure.
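To make the allowlist idea concrete, here is a minimal sketch of the deny-by-default check an egress proxy might apply; the domains, the device-posture flag, and the function name are illustrative assumptions, not any vendor's API:

```python
# Hypothetical zero-trust allowlist for outbound GenAI traffic.
# All domains and policy values below are illustrative examples.

APPROVED_AI_DOMAINS = {
    "copilot.internal.example.com",   # company-sanctioned assistant
    "api.approved-vendor.example",    # vetted third-party GenAI API
}

def is_request_allowed(destination_host: str, device_is_managed: bool) -> bool:
    """Deny by default; allow only approved destinations from managed devices."""
    if not device_is_managed:
        return False
    return destination_host in APPROVED_AI_DOMAINS

# An unmanaged laptop calling an unapproved chatbot is blocked.
print(is_request_allowed("chat.some-genai.example", False))      # False
print(is_request_allowed("api.approved-vendor.example", True))   # True
```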
Employees frequently use AI for drafting emails and basic analysis, though most still prefer human oversight for critical tasks, reflecting growing comfort with AI-assisted work.
Organizations can detect shadow AI activities using AI-powered cybersecurity tools, cloud access security brokers, and systems analyzing user behavior.
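As a rough illustration of the detection side, the sketch below scans a proxy log for traffic to known GenAI endpoints and flags unusually large uploads, a crude stand-in for the behavioral signals a real UEBA or CASB product would use; the log columns, domain list, and threshold are assumptions for this example:

```python
# Illustrative shadow-AI detection pass over a CSV proxy log.
# Expected columns (assumed for this sketch): user, host, bytes_out.
import csv

GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
UPLOAD_THRESHOLD = 5_000_000  # flag uploads over ~5 MB (arbitrary cutoff)

def scan_proxy_log(path: str) -> list[dict]:
    """Return alerts for large uploads to known GenAI services."""
    alerts = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in GENAI_DOMAINS and int(row["bytes_out"]) > UPLOAD_THRESHOLD:
                alerts.append({"user": row["user"], "host": row["host"],
                               "bytes_out": int(row["bytes_out"])})
    return alerts
```

A production system would correlate many more signals (new destinations per user, off-hours activity, file types), but the known-destination-plus-anomaly pattern is the same.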
Companies should allocate budgets for AI security, balancing automation with human oversight to prevent data leaks and security breaches.
While it is impossible to prevent staff from using AI tools entirely, safeguards and policies are essential to minimize risk while preserving the productivity gains.
Technological solutions combined with behavioral training and policy enforcement are vital to mitigate data leaks and privacy violations caused by shadow AI.
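One common technological building block is a pre-submission data loss prevention (DLP) check that scans prompts before they leave the network. The sketch below uses deliberately simplified regexes; real DLP rule sets are far larger, and the pattern names and example strings here are fabricated:

```python
# Minimal DLP-style prompt check: redact obvious secrets/PII before a
# prompt is forwarded to an external GenAI service. Patterns are
# simplified illustrations, not production rules.
import re

SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the redacted prompt plus the names of the rules that fired."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

clean, hits = redact_prompt("Use key-ABCDEF1234567890XY for bob@corp.example")
print(hits)  # ['api_key', 'email']
```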
Third-party plugins for GenAI platforms such as ChatGPT introduce additional vulnerabilities, as shown by security flaws disclosed in March 2024, underscoring the need for vetting and security controls.
The use of shadow AI tools such as ChatGPT, Claude, and AI image generators has surged globally, with related data-loss incidents doubling and accounting for 14% of all data loss prevention incidents in 2025.
Vulnerable data includes demographic details and nominally anonymized datasets that can be re-identified to expose private information, raising the risk of privacy breaches.
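To see why "anonymized" is not the same as safe, consider that quasi-identifiers such as postal code, birth year, and gender can often be joined against public records to restore identities. A toy sketch with entirely fabricated in-memory data:

```python
# Toy re-identification: join an "anonymized" dataset with a public
# directory on shared quasi-identifiers. All records are fabricated.

anonymized = [  # names stripped, quasi-identifiers retained
    {"zip": "M5V", "birth_year": 1987, "gender": "F", "diagnosis": "..."},
]
public_directory = [  # e.g. a voter roll or scraped profile data
    {"name": "Jane Doe", "zip": "M5V", "birth_year": 1987, "gender": "F"},
]

KEYS = ("zip", "birth_year", "gender")

def reidentify(anon_rows, public_rows):
    """Match rows whose quasi-identifiers line up exactly."""
    return [(p["name"], a["diagnosis"])
            for a in anon_rows for p in public_rows
            if all(a[k] == p[k] for k in KEYS)]

print(reidentify(anonymized, public_directory))  # [('Jane Doe', '...')]
```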
Future efforts should focus on developing flexible security frameworks, privacy safeguards, and comprehensive regulations to keep pace with rapid AI advancements.
India leads in GenAI adoption, with 92% of workers using these tools often without organizational approval, driven by user demand and policy gaps.
To regulate AI tool usage, companies should enforce policies through compliance checks, penalties, and legal measures that ensure accountability as the AI landscape evolves.
A recent cybersecurity report indicates that 20% of surveyed companies experienced data breaches linked to shadow AI, with the average breach cost in Canada rising to nearly $7 million.
Employees are increasingly turning to unauthorized AI tools, known as shadow AI, which pose significant cybersecurity and confidentiality risks for organizations.
Raising awareness through training and education about the dangers of shadow AI is crucial to reduce unsafe practices and ensure accountability.
Cybersecurity incidents tied to shadow AI continue to rise, and that one-in-five breach rate underscores the tools' growing financial impact.
Experts recommend establishing governance frameworks, including an AI committee and guardrails like a zero-trust approach, to manage AI use and mitigate associated risks.
Some companies deploy internal chatbots to control data, but cybersecurity experts warn these systems are vulnerable if not properly secured, as demonstrated by a researcher accessing sensitive information within 47 minutes.
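The typical weakness in such internal chatbots is a retrieval layer that searches every indexed document regardless of who is asking. The sketch below shows the missing per-user filter; the document store, group names, and ACL scheme are hypothetical placeholders:

```python
# Sketch of per-user access filtering in a retrieval-backed chatbot.
# Documents, groups, and the search logic are simplified placeholders.

DOCUMENTS = [
    {"id": 1, "text": "Q3 all-hands notes", "allowed_groups": {"all-staff"}},
    {"id": 2, "text": "Executive salary bands", "allowed_groups": {"hr", "execs"}},
]

def retrieve(query: str, user_groups: set[str]) -> list[str]:
    """Search only documents the requester may read. Without this filter,
    the chatbot can leak anything in its index to any prompt."""
    readable = (d for d in DOCUMENTS if d["allowed_groups"] & user_groups)
    return [d["text"] for d in readable if query.lower() in d["text"].lower()]

print(retrieve("salary", {"all-staff"}))  # [] -- blocked for regular staff
print(retrieve("salary", {"hr"}))         # ['Executive salary bands']
```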
Security incidents involving shadow AI have driven up breach costs, adding an average of $200,000 globally and approximately Rs 18 million in India, mainly because of delayed detection and containment.
Many organizations lack proper AI governance: 63% have no policy for managing shadow AI, and 97% of organizations that suffered AI-related breaches lacked proper AI access controls.
In May 2023, it emerged that Samsung employees had uploaded sensitive source code to ChatGPT, a high-profile example of data leakage through shadow AI use.
Despite heavy investments in AI, only 5% of companies see significant returns, while over 90% of employees use personal AI tools for work, exposing a disconnect between policy and practice.
Without proper governance, security measures, and training, AI's productivity benefits can be overshadowed by significant threats to business integrity and security.
AI tools can produce inaccurate information and infringe on copyrights, increasing organizational risks related to misinformation and legal issues.
Summary based on 14 sources
Sources

CityNews Toronto • Aug 24, 2025
Businesses put at risk when employees use unauthorized AI tools at work
Winnipeg Free Press • Aug 24, 2025
Businesses put at risk when employees use unauthorized AI tools at work
St. Albert Gazette • Aug 24, 2025
Businesses put at risk when employees use unauthorized AI tools at work