Many employees now turn to generative AI not just for answers, but for help summarising documents, drafting content, analysing datasets, and writing code.
It has become a useful assistant, embedded into everyday work. But the security impact of this shift is now coming into focus. As GenAI use accelerates, so too does the amount of sensitive information flowing through it. The result is a surge of GenAI-related data exposure incidents, up nearly fivefold year-on-year, creating a volume and velocity of leakage that enterprises have never had to manage before.
Data leaks are the natural consequence of deploying technology that ingests data faster than most security models can handle. But as GenAI becomes the default way work gets done, every enterprise must understand that a tool built for speed and convenience has fundamentally altered its risk landscape.
GenAI Adoption Is Outpacing Governance
Over the past year, the number of people using SaaS-based GenAI applications like ChatGPT and Gemini has tripled, while the volume of prompts sent to those tools has increased sixfold. In some organizations, that translates into tens of thousands—or even millions—of prompts every month.
However, GenAI usage no longer fits the assumptions most enterprise security models were built around. Traditional controls are typically designed for browser-based access, clearly defined applications, and policy enforcement at known boundaries. GenAI breaks those assumptions.
For GenAI to be useful, it must be given context. That often means employees feeding models with internal material, whether content, operational data, or customer information, as part of everyday work. But it also means sensitive data is routinely moved into environments where traditional monitoring and protection controls don’t often apply, increasing the risk of unintended exposure or downstream misuse.
Increasingly, AI is being accessed and integrated through APIs, embedded directly into internal tools, workflows, and automation pipelines. Today, around 70% of organizations connect to GenAI services through programmatic interfaces rather than traditional browser sessions, shifting large volumes of AI activity outside the places security teams have historically focused controls on.
But when these controls introduce friction or limit functionality, users adapt, switching tools, accounts, or interfaces to keep work moving. In fact, 47% of GenAI users still rely on personal or unmanaged AI applications, with many moving back and forth between personal and enterprise accounts.
Even when organizations provide approved tools, this switching creates blind spots that make consistent oversight difficult. These blind spots can be exploited by attackers looking for easier entry points, or they can let accidental leakage go unnoticed until it is too late.
For security teams, the strain around GenAI is already visible. The average organization now sees around 223 incidents of users violating GenAI data policies every month, a reflection of how hard it is to establish guardrails around a technology that is spreading faster than policies, processes, and monitoring can keep pace.
The challenge for enterprises is to shore up defenses without slowing the productivity gains that made the technology indispensable in the first place.
Why Security Must Follow the User, Not the Interface
Thankfully, there are clear practices and solutions enterprises can adopt to reduce exposure and regain visibility as GenAI use continues to expand. Together, they form a sustainable security model that focuses less on where GenAI is accessed and more on who is using it, how they are using it, and what data is involved.
That means applying zero trust principles to GenAI interactions: verify identity and context, and apply consistent controls to every transaction, not just the ones that happen in the browser. Understanding which identity, human or system, is interacting with a GenAI service is key, particularly as AI becomes more deeply integrated into internal workflows. Identity provides the anchor point for consistent policy enforcement, regardless of interface or access method.
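To make that concrete, here is a minimal sketch of what an identity-anchored policy check might look like when every GenAI transaction, browser or API, passes through a single decision point. The application names, classification labels, and decision values are illustrative assumptions, not any particular vendor's model.

```python
# Hypothetical sketch: identity-anchored policy check applied to every GenAI
# transaction, whether it arrives via the browser or an API integration.
from dataclasses import dataclass

APPROVED_APPS = {"chatgpt-enterprise", "gemini-enterprise"}  # assumed tenant allow-list

@dataclass
class GenAIRequest:
    identity: str             # human user or service account
    identity_type: str        # "human" or "system"
    app: str                  # destination GenAI service
    channel: str              # "browser" or "api"
    data_classification: str  # e.g. "public", "internal", "confidential"

def enforce(request: GenAIRequest) -> str:
    """Return an enforcement decision based on who is asking, not where from."""
    # Unapproved destinations are blocked regardless of access method.
    if request.app not in APPROVED_APPS:
        return "block"
    # Confidential data triggers additional guardrails for any identity.
    if request.data_classification == "confidential":
        return "coach-and-log"  # warn the user, record the event for review
    # Service accounts get the same scrutiny as human users.
    if request.identity_type == "system" and request.channel == "api":
        return "allow-with-logging"
    return "allow"
```

The point of the sketch is that the decision keys off identity and data sensitivity, so the same rule applies whether the request comes from a browser tab or an automation pipeline.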
Equally important is behaviour. GenAI risk is not static; it changes based on how frequently tools are used, what types of data are shared, and how those patterns evolve over time. Security teams need solutions that provide visibility into usage behaviours that signal increased exposure, such as sudden spikes in data uploads or repeated interactions involving sensitive information, rather than relying solely on binary controls. In practice, this means visibility must extend across both inline traffic and API-based access, as GenAI is increasingly embedded into tools and workflows rather than accessed through a single interface.
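As a rough illustration of one such behavioural signal, the sketch below flags a user whose daily upload volume to GenAI tools jumps well above their own recent baseline. The window and threshold are arbitrary assumptions; real analytics platforms weigh many more signals than a single volume metric.

```python
# Hypothetical sketch: flag users whose GenAI upload volume spikes well above
# their own recent baseline, one of the behavioural signals described above.
from statistics import mean, stdev

def is_upload_spike(history_mb: list[float], today_mb: float,
                    min_days: int = 7, threshold_sigmas: float = 3.0) -> bool:
    """Compare today's upload volume against the user's rolling baseline."""
    if len(history_mb) < min_days:
        return False  # not enough history to establish a baseline
    baseline = mean(history_mb)
    spread = stdev(history_mb)
    return today_mb > baseline + threshold_sigmas * max(spread, 1.0)

# Example: a user who normally uploads ~5 MB a day suddenly pushes 80 MB.
recent = [4.2, 5.1, 6.0, 4.8, 5.5, 4.9, 6.3]
print(is_upload_spike(recent, 80.0))  # True: worth a closer look
```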
That visibility needs to be paired with data protection that travels with the data itself. Because GenAI relies on ingesting information, organizations need controls that can continuously identify and protect sensitive material as it moves into AI systems. This is where Data Loss Prevention (DLP) becomes essential: not as a blunt blocking mechanism, but as a way to recognise sensitive content in context and apply guardrails that distinguish acceptable use from genuine risk.
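A simplified sketch of that idea: scan a prompt for sensitive patterns before it leaves for the GenAI service and return a guardrail action rather than a flat block. The patterns and action labels below are illustrative only; production DLP engines use far richer detection than a handful of regular expressions.

```python
# Hypothetical sketch: a lightweight DLP-style check run on a prompt before it
# is sent to a GenAI service. Patterns and actions are illustrative only.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def inspect_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return a guardrail action and the categories of sensitive data found."""
    findings = [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]
    if not findings:
        return "allow", findings
    # Context-aware response: redact and warn rather than block outright.
    return "redact-and-coach", findings

action, hits = inspect_prompt("Customer card 4111 1111 1111 1111 was declined, why?")
print(action, hits)  # redact-and-coach ['credit_card']
```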
None of this works without clear, enforceable policies. Allow-and-block policies can still play an important role in reducing unnecessary exposure, particularly for GenAI tools that don’t serve a legitimate business purpose or introduce disproportionate risk. But policies must be flexible enough to support legitimate GenAI use, while still providing human oversight and accountability when sensitive data is involved.
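One way to keep such policies flexible is to express them as tiers rather than a single allow/block switch, as in the hypothetical configuration sketched below. The tool names, tiers, and decision values are assumptions made purely for illustration.

```python
# Hypothetical sketch: a tiered GenAI access policy expressed as data, so rules
# can adapt to business purpose and data sensitivity rather than one hard switch.
GENAI_POLICY = {
    "sanctioned": {   # enterprise tenants with contractual data protections
        "apps": ["chatgpt-enterprise", "gemini-enterprise"],
        "sensitive_data": "allow-with-dlp",
    },
    "tolerated": {    # legitimate business use, but with extra guardrails
        "apps": ["consumer-chatgpt"],
        "sensitive_data": "block-and-coach",
    },
    "unsanctioned": { # no business purpose or disproportionate risk
        "apps": ["unvetted-ai-tool"],
        "sensitive_data": "block",
    },
}

def decision(app: str, contains_sensitive: bool) -> str:
    """Resolve an access decision from the tiered policy above."""
    for tier in GENAI_POLICY.values():
        if app in tier["apps"]:
            return tier["sensitive_data"] if contains_sensitive else "allow"
    return "block"  # default-deny for anything not explicitly tiered

print(decision("consumer-chatgpt", contains_sensitive=True))  # block-and-coach
```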
These practices highlight that securing GenAI is now about aligning identity, behaviour, and data protection so security follows user intent and data movement – wherever and however GenAI is used.
Where GenAI Security Goes Next
GenAI is already here. It is reshaping how work gets done and, with it, how data moves through the enterprise. Treating it like a controlled experiment or a niche risk only widens the gap between how people work and how security is applied.
Closing that gap requires security teams to rethink long-held assumptions and to build controls that move at the same pace as the workforce they are meant to protect.