Advanced AI tools, including large language models (LLMs), are beginning to demonstrate their promise to enhance operational efficiency in businesses. A 2025 McKinsey survey found that over three-quarters of firms use AI in at least one business function, with 71% regularly using generative AI.

Integrating AI capabilities into operations, from data analysis to generating reports, is no longer a nice-to-have but an essential component of competing in today’s marketplace.

However, there is a major security and privacy issue that threatens to derail the benefits offered by these technologies – shadow AI.

Shadow AI refers to the use of AI tools and applications by employees outside the visibility and approval of the organization’s IT department.

These tools include public LLMs such as Google’s Gemini and OpenAI’s ChatGPT, as well as independent software-as-a-service (SaaS) AI applications. Employees who input sensitive business and personal data into these models are unwittingly exposing their organization to significant security, privacy and regulatory risks.

IBM’s Cost of a Data Breach Report 2025 found that 20% of organizations have staff members using unsanctioned AI tools that are also unprotected. A 2024 report from RiverSafe observed that one in five UK companies has had potentially sensitive corporate data exposed via employee use of generative AI.

Many organizations have responded to these risks by placing partial or full bans on specific AI tools. However, issuing bans is an undesirable, and often ineffective, approach to addressing shadow AI use.

Instead, organizations must establish appropriate processes and measures to reduce the security risks of unmanaged AI tools, while ensuring employees are able to leverage the capabilities that these technologies offer.


The Shadow AI Threat

Security and Privacy Risks of Shadow AI

The security and privacy risks of shadow AI stem from IT teams losing the ability to know where their organization’s data is, how it is being used and whether it is being properly protected.

Google research found that 77% of UK cyber leaders believe generative AI has contributed to a rise in security incidents. The main risks cited were inadvertent data leakage through interactions with LLM chatbots, and hallucinations.

Anton Chuvakin, security advisor, Office of the CISO at Google Cloud, told Infosecurity: “When employees paste confidential meeting notes into an unvetted chatbot for summarization, they may unintentionally hand over proprietary data to systems that could retain and reuse it, such as for training. Without visibility into such usage, security teams face the difficult task of protecting assets they can’t see or control.”

Dan Lohrmann, field CISO at Presidio, explained that this reality means organizations are unable to prove to employees, clients, shareholders and regulators that compliance and contractual requirements around data protection are being met.

“Ultimately, poorly managed shadow AI can lead to data breaches, bad business decisions, legal issues, more security incidents and poor business results,” he noted.

Another risk is that organizations can no longer be sure that decisions made by staff are based on AI tools that have been properly trained on the right datasets.

If an AI system is ineffectively trained or relies on poor datasets, hallucinations can occur, where the AI generates information that appears credible but is entirely false.

How Shadow AI Presents a Different Challenge to Shadow IT

In many respects, shadow AI is a subset of a broader shadow IT problem. Shadow IT is an issue that emerged more than a decade ago, largely emanating from employee use of unauthorized cloud apps, including SaaS.

Lohrmann noted that cloud access security broker (CASB) solutions were developed to deal with the shadow IT issue. These tools are designed to provide organizations with full visibility of what employees are doing on the network and on protected devices, while only allowing access to authorized instances.

However, shadow AI presents distinct challenges that CASB tools are unable to adequately address.

“Organizations still need to address other questions related to licensing, application sprawl, security and privacy policies, procedures and more. There are also training considerations, product evaluations and business workflow management to consider,” Lohrmann noted.

A key difference between shadow IT and shadow AI lies in the nature of the data involved, the speed of adoption and the complexity of the underlying technology.

In addition, AI is often integrated into existing IT systems, including cloud applications, making these tools more difficult to identify.

Chuvakin added, “With shadow IT, unauthorized tools often leave recognizable traces – unapproved applications on devices, unusual network traffic or access attempts to restricted services. Shadow AI interactions, however, often occur entirely within a web browser or personal device, blending seamlessly with regular online activity or not leaving any trace on any corporate system at all.”

Why Banning AI Tools is Not the Answer

A number of organizations have attempted to impose full or partial bans on generative AI. Famously, it was reported in 2023 that Samsung banned the use of generative AI tools in a key division after staff on separate occasions shared sensitive data, including source code and meeting notes, with ChatGPT.

However, this is not a viable strategy for a number of reasons. One is that AI is now a key business accelerator, and failing to leverage such technologies could put organizations at a commercial disadvantage.

Diana Kelley, CISO at AI security firm Noma Security, noted: “No company wants to succumb to the risk of no longer being competitive in the market.”

In addition to this business reality, bans on publicly facing AI models are also likely to be ineffective. Chuvakin described such an approach as little more than “security theatre” in a modern, distributed workforce. In fact, bans could simply serve to drive AI usage underground on less secure networks, even further outside the IT team’s view.

“Today’s multi-modal AI models are running on employees’ phones anyway, and can easily discern images from screens without any connection to the employer's systems,” Chuvakin said.

“If you ban AI, you will have more shadow AI and it will be harder to control,” he added.

How to Develop an Effective Shadow AI Strategy

It is vital that security leaders design an effective strategy to tackle shadow AI while still enabling employees to leverage the benefits that tools like generative AI offer, giving them an incentive to adhere to the policies in place.

Identify and Approve AI Tools Quickly

The first stage is to discover what AI tools are in use across the enterprise, who is using them and what data they access.

There are a number of specialist vendor tools on the market that can be used for this purpose.

Security teams should use this information to determine actions such as monitoring data flows to sanctioned apps and, when necessary, blocking access to unsanctioned tools.
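
As a simple illustration of what discovery can look like in practice, the Python sketch below scans a proxy log export for traffic to well-known public generative AI domains and surfaces any use that falls outside the sanctioned list. The domain names, CSV column names and file path are illustrative assumptions rather than references to any particular product.

```python
import csv
from collections import Counter

# Hypothetical list of well-known public generative AI domains; a real
# deployment would maintain this via a vendor feed or CASB/SSE catalogue.
GENAI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}


def find_shadow_ai(proxy_log_path: str, sanctioned: set) -> Counter:
    """Count requests per (user, domain) for AI domains not on the sanctioned list.

    Assumes a CSV proxy export with 'user' and 'domain' columns.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in GENAI_DOMAINS and domain not in sanctioned:
                hits[(row["user"], domain)] += 1
    return hits


if __name__ == "__main__":
    # "proxy_export.csv" is a placeholder for whatever log export is available.
    findings = find_shadow_ai("proxy_export.csv", sanctioned={"gemini.google.com"})
    for (user, domain), count in findings.most_common(10):
        print(f"{user} -> {domain}: {count} requests")
```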

Chuvakin said that actions should differentiate between consumer-grade and enterprise-grade AI, approving only those that meet robust security criteria.

“Key safeguards include encryption by default, data residency controls, prevention of training on customer data, and access governance aligned with the principle of least privilege,” he commented.
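
These criteria can be captured as a simple, repeatable checklist. The sketch below is a minimal illustration of that idea, assuming a hypothetical assessment record whose fields mirror the safeguards Chuvakin lists; real approval workflows would draw on vendor questionnaires and formal risk assessments.

```python
from dataclasses import dataclass


# Illustrative assessment record; the field names mirror the safeguards quoted
# above and are not tied to any specific vendor questionnaire.
@dataclass
class AIToolAssessment:
    name: str
    encrypts_data_by_default: bool
    supports_data_residency_controls: bool
    excludes_customer_data_from_training: bool
    enforces_least_privilege_access: bool

    def approved(self) -> bool:
        # A tool is only sanctioned if every safeguard is met.
        return all([
            self.encrypts_data_by_default,
            self.supports_data_residency_controls,
            self.excludes_customer_data_from_training,
            self.enforces_least_privilege_access,
        ])


if __name__ == "__main__":
    candidate = AIToolAssessment(
        name="ExampleEnterpriseLLM",  # hypothetical tool name
        encrypts_data_by_default=True,
        supports_data_residency_controls=True,
        excludes_customer_data_from_training=False,
        enforces_least_privilege_access=True,
    )
    print(f"{candidate.name} approved: {candidate.approved()}")
```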

Employees using unsanctioned tools should be pointed towards more secure alternatives that are easy to access within the organization.

Chuvakin emphasized that an exception process is also important, through which low-risk uses of AI can be approved to run on a consumer-grade tool.

This should provide a clear and simple path to getting particular tools approved, avoiding the risk of driving their use underground.

Kelley recommended that security teams work with business leaders on developing operating procedures and instructions that give detailed specifics on company-approved AI use.

“Clear guardrails not only prevent misuse but also build employee confidence in knowing what’s safe, legal and compliant,” she noted.

Introduce Safeguards for AI Tools

For approved AI tools, ongoing safeguards against misuse are required. Many of these will be the same core measures used for traditional IT, such as access control and compliance monitoring. However, these approaches must be adjusted to account for AI’s unique risks.

“For example, monitoring both inputs and outputs for security and safety, as well as compliance, comes to the fore,” Chuvakin noted.

This includes automated checks to flag sensitive data before it leaves the organization and integrating data loss prevention (DLP) systems into AI tools.
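
A minimal sketch of such a pre-submission check is shown below, using hand-rolled regular expressions purely for illustration; in practice this screening would be handled by the organization's existing DLP classifiers rather than custom patterns, and the send function is a hypothetical stand-in for the sanctioned tool's API.

```python
import re

# Illustrative patterns only; production screening would reuse the
# organization's existing DLP classifiers rather than custom regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "credential marker": re.compile(r"(?i)\b(api[_-]?key|password|secret)\b"),
}


def flag_sensitive(prompt: str) -> list:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def submit_to_llm(prompt: str) -> None:
    """Block the request before the data leaves the organization if anything is flagged."""
    findings = flag_sensitive(prompt)
    if findings:
        raise ValueError(f"Prompt blocked: possible {', '.join(findings)} detected")
    # send_to_approved_model(prompt)  # hypothetical call to the sanctioned tool


if __name__ == "__main__":
    try:
        submit_to_llm("Summarize this contract for jane.doe@example.com")
    except ValueError as err:
        print(err)
```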

Kelley noted that measures such as runtime monitoring and policy enforcement are especially important for AI agents, as their actions often “drift” due to their autonomous nature.

These insights should be used to refine policies and controls over time, she advised.

“Inputs and outputs to AI should be logged for forensic purposes, and agent activity – including all API calls, file operations and backend store access – must be captured and reviewed. Agent access patterns should be baselined and deviations flagged for investigation. Also, implement ongoing oversight of AI activity, including periodic audits of outputs and usage patterns,” Kelley said.
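
The sketch below illustrates the logging and baselining idea Kelley describes, assuming a hypothetical JSON-lines audit log and a hand-set baseline of expected agent actions; a production deployment would feed this data into existing SIEM and anomaly detection tooling rather than a standalone script.

```python
import json
import time
from collections import Counter

# Hypothetical baseline of expected daily action counts for one agent.
BASELINE = {"read_file": 200, "call_crm_api": 50, "write_report": 20}
DEVIATION_FACTOR = 3  # flag anything more than 3x the expected volume


def log_agent_event(log_path: str, agent: str, action: str, detail: str) -> None:
    """Append one agent action to a JSON-lines audit log for later forensics."""
    event = {"ts": time.time(), "agent": agent, "action": action, "detail": detail}
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")


def flag_deviations(log_path: str) -> list:
    """Compare observed action counts against the baseline and report outliers."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            counts[json.loads(line)["action"]] += 1
    alerts = []
    for action, observed in counts.items():
        expected = BASELINE.get(action, 0)
        if expected == 0 or observed > expected * DEVIATION_FACTOR:
            alerts.append(f"{action}: {observed} events (baseline {expected})")
    return alerts


if __name__ == "__main__":
    # Actions outside the baseline are always flagged for investigation.
    log_agent_event("agent_audit.jsonl", "report-bot", "export_database", "crm_full_dump")
    print(flag_deviations("agent_audit.jsonl"))
```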

Employee Training and Awareness

Employee training must also be updated to address the unique challenges of shadow AI.

“Employees need to understand what qualifies as shadow AI, the risks involved, and the approved alternatives,” Chuvakin explained.

However, this should be a two-way street, with organizations taking the time to understand how and why employees want to use AI – and updating their guardrails appropriately.

Staff need to view the security team as an enabler, rather than a blocker, when it comes to AI adoption, a culture which is essential to ensure adherence to policies.

“Regularly communicating successes, sharing lessons learned from pilot projects, and providing quick-response support when issues arise will encourage responsible experimentation and help AI become a trusted, productive part of the business,” Kelley advised.

Conclusion

Shadow AI presents a novel data security and privacy risk to enterprises across all industries. Organizations should leverage prior experiences of managing shadow IT to address this issue but also recognize the distinct challenges of shadow AI.

Banning AI tools such as LLMs is not a viable solution. Instead, specific visibility, technical monitoring, policies and training approaches need to be developed that facilitate the safe employee use of AI. This includes a lightweight and clear approvals process, easy access to enterprise-approved tools and automated solutions that can rapidly flag risky inputs and outputs.