Security teams are well-versed in managing insider threats. These threats come from trusted individuals with legitimate system access who exploit that trust, whether through malicious intent or reckless behavior.
According to the UK Cyber Security Breaches Survey 2025, insider threats were among the contributing factors for the 50% of UK businesses that experienced a cyber breach or attack in the past 12 months.
Now, AI agents represent a new type of insider on the horizon. Without proper consideration, these digital entities are poised to become the ultimate agents of chaos within existing authorization frameworks.
What Works for Humans Doesn’t Always Work for Agents
Authorization (AuthZ) systems manage users’ access to resources, ensuring that people can only perform the actions they’re supposed to. However, most AuthZ systems weren’t built to stop everything users might attempt because they were designed with the expectation that external factors would constrain human misbehavior.
This is why over-provisioning user access is common and has traditionally been manageable. When someone joins a company, it’s simpler to copy an existing employee’s permission set to their account than to carefully scope minimal access rights. This approach works because humans understand context, but AI agents have no such awareness.
Agentic AI systems operate with the same trusted access as human users, but without the social constraints, fear of consequences or common sense that typically keep humans from overstepping boundaries. While a human employee might hesitate before accessing sensitive data they don’t need, an AI agent will optimize for efficiency and exploit every permission it’s been granted in pursuit of its goals.
This creates a perfect storm for AuthZ systems that were designed around human behavior patterns. AI agents require a new approach.
Three Ways Security Teams Can Minimize Agentic AI Chaos
Responsible governance can limit the chaos that agentic AI may cause within an organization’s AuthZ systems by focusing on three key areas:
Implement Composite Identities
Current authentication (AuthN) and AuthZ systems cannot distinguish between human users and AI agents. When AI agents take actions, they operate under human identities or use access credentials based on human-centric permission models.
This complicates simple questions: Who authored this code? Who initiated this merge request? Who created this Git commit?
It also creates accountability gaps around questions such as: Who told the AI agent to create this code? What context did the agent need to build it? What resources did the AI have access to?
Composite identities solve this problem by linking an AI agent’s digital identity with the human user instructing it. When an AI agent attempts to access a resource, the system can authenticate and authorize both the agent and its human operator, creating a complete audit trail.
This approach maintains accountability while enabling organizations to set more granular permissions based on the specific human-AI pairing.
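To make this concrete, here is a minimal sketch of what a composite identity check could look like. All names here (CompositeIdentity, authorize, the policy shape) are illustrative assumptions, not the API of any particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical composite identity: every request carries both the
# agent's identity and the identity of the human operating it.
@dataclass(frozen=True)
class CompositeIdentity:
    agent_id: str      # e.g., "agent:code-assistant-7"
    operator_id: str   # e.g., "user:jsmith"

def authorize(identity: CompositeIdentity, resource: str, action: str,
              policy: dict) -> bool:
    """Allow an action only if this specific human-AI pairing holds
    the permission, and record the decision for the audit trail."""
    pairing = (identity.agent_id, identity.operator_id)
    allowed = f"{action}:{resource}" in policy.get(pairing, set())
    # The audit entry resolves "who did this?" to an agent AND the
    # human who directed it, closing the accountability gap.
    print(f"{datetime.now(timezone.utc).isoformat()} "
          f"agent={identity.agent_id} operator={identity.operator_id} "
          f"{action}:{resource} -> {'ALLOW' if allowed else 'DENY'}")
    return allowed

# Example: this pairing may merge to staging but not to production.
policy = {("agent:code-assistant-7", "user:jsmith"): {"merge:repo/staging"}}
ident = CompositeIdentity("agent:code-assistant-7", "user:jsmith")
authorize(ident, "repo/staging", "merge", policy)     # ALLOW
authorize(ident, "repo/production", "merge", policy)  # DENY
```

Keying the policy on the pairing, rather than on the agent or the human alone, is what makes the more granular, pairing-specific permissions possible.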
Deploy Comprehensive Monitoring Frameworks
Operations, development and security teams need ways to track AI agents’ activities across workflows, processes and systems. It’s not sufficient to know what an agent is doing in a codebase; teams also need to keep an eye on its activity in staging and production environments, in associated databases and in any applications it can access.
Organizations should consider using Autonomous Resource Information Systems (ARIS) that mirror existing Human Resource Information Systems (HRIS). These frameworks maintain profiles of autonomous agents, document their abilities and specializations and manage their operational boundaries.
We can see the beginnings of these technologies in LLM data management systems like Knostic, but the field is rapidly evolving.
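There is no standard ARIS schema yet, so the following is only a sketch of the kind of agent profile such a system might maintain; every field name is an assumption for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical ARIS record, mirroring what an HRIS keeps for an
# employee: identity, capabilities, where the agent may operate and
# which human is accountable for it.
@dataclass
class AgentProfile:
    agent_id: str                                           # e.g., "agent:doc-bot-3"
    capabilities: set = field(default_factory=set)          # e.g., {"summarize", "open-pr"}
    allowed_environments: set = field(default_factory=set)  # e.g., {"dev", "staging"}
    responsible_owner: str = ""                             # human accountable for oversight
    active: bool = True

    def within_boundaries(self, environment: str, capability: str) -> bool:
        """Boundary check a monitoring layer could run before
        letting the agent act in a given environment."""
        return (self.active
                and environment in self.allowed_environments
                and capability in self.capabilities)

profile = AgentProfile(
    agent_id="agent:doc-bot-3",
    capabilities={"summarize", "open-pr"},
    allowed_environments={"dev", "staging"},
    responsible_owner="user:jsmith",
)
print(profile.within_boundaries("staging", "open-pr"))     # True
print(profile.within_boundaries("production", "open-pr"))  # False
```

A profile like this gives monitoring tools a single place to answer what an agent is allowed to do and in which environments, across the codebase, staging, production and any connected applications.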
Establish Transparency and Accountability Structures
Even with sophisticated monitoring frameworks, organizations must maintain clear accountability structures for autonomous AI agents. This means establishing policies that require disclosure when AI tools are being used and designating individuals responsible for agent oversight.
Regular human review of agent actions and outputs is essential, but more importantly, organizations need clear escalation procedures when agents overstep their boundaries.
This accountability structure should include audits of agent permissions, review of unusual behavior patterns and established playbooks for rapidly revoking or modifying agent access when problems arise.
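As a sketch of one such playbook step, the hypothetical routine below revokes any permission an agent has not exercised within an audit window and flags it to the agent’s owner; the names and the 30-day window are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

AUDIT_WINDOW = timedelta(days=30)  # illustrative review window

def audit_agent_permissions(granted: set, last_used: dict, owner: str) -> set:
    """Revoke permissions the agent has not used within the audit
    window (a least-privilege ratchet) and escalate to its owner."""
    now = datetime.now(timezone.utc)
    never = datetime.min.replace(tzinfo=timezone.utc)
    stale = {perm for perm in granted
             if now - last_used.get(perm, never) > AUDIT_WINDOW}
    for perm in sorted(stale):
        print(f"REVOKE {perm}; notify {owner} for review")
    return granted - stale  # the tightened permission set

granted = {"read:crm", "write:crm", "read:payroll"}
last_used = {"read:crm": datetime.now(timezone.utc) - timedelta(days=2)}
print(audit_agent_permissions(granted, last_used, owner="user:jsmith"))
# REVOKE read:payroll; notify user:jsmith for review
# REVOKE write:crm; notify user:jsmith for review
# {'read:crm'}
```

Running a step like this on a schedule, alongside review of unusual behavior patterns, keeps an agent’s effective permissions from silently drifting beyond what it actually needs.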
Responsible Agent Deployment
In many cases, the use of AI agents will lead to remarkable innovations and breakthroughs. They will also force teams to reimagine the structure of current AuthZ systems.
This form of disruption is not unprecedented. The shift to cloud computing similarly challenged existing security frameworks, forcing organizations to develop new approaches to identity management, network security and data protection.
Security often follows innovation, and success requires learning to strike a balance between the two. Facing this transformation head-on ensures AI agents deliver on their promise of productivity without becoming agents of chaos.