The emerging adoption of AI agents, programs involving large language models (LLMs) and conduct automated or semi-automated tasks, brings its own set of new cybersecurity risks.

Sean Morgan, Chief Architect at Protect AI, a firm recently acquired by Palo Alto Networks, highlighted agentic AI’s main risks during the AI Summit at Black Hat, in Las Vegas, on August 5.

He divided those risks into three main categories:

  1. Corrupted context within an instruction to the AI agent
  2. Dynamic tool sourcing and supply chain risks
  3. Authentication and authorization mistakes throughout the AI agent’s control flow

Exploring the Top Cyber Threats Facing Agentic AI Systems - Infosecurity Magazine

Sean Morgan, Chief Architect at Protect AI, at the AI Summit at Black Hat, in Las Vegas, on August 5. Credit: Infosecurity Magazine

Top Three Cyber Risks of Using AI Agents

Context Corruption

Context corruption represents the most critical security risk for AI agents, Morgan said.

LLMs fundamentally struggle to distinguish between legitimate instructions and malicious interventions.

Morgan highlighted that these models are "very unreliable at determining what within the context is coming from the intended user."

This means an attacker can strategically inject instructions that rewrite entirely the agent's original purpose – similar to an SQL injection attack.

Additionally, Morgan noted that the vulnerability extends beyond simple text interactions. Context can be corrupted through multiple channels, including chat histories, long-term memory interactions, vector databases, document repositories and even outputs from other AI agents.

Hallucinations from one agent can become "ground truth" for another, creating cascading misinformation risks.

Real-world examples like the EchoLeak, a critical zero-click vulnerability in Microsoft 365 Copilot that can lead to the exfiltration of sensitive corporate data with a simple email, demonstrate how malicious emails or repository readme files can manipulate an AI agent's understanding and actions.

The complexity increases in multi-agent systems where interactions become exponentially more unpredictable.

An attacker could potentially create a chain of context manipulations that progressively alter an agent's behavior, making detection extremely challenging.

Dynamic Tool Sourcing and Supply Chain Risks

Dynamic tool sourcing introduces significant security complexities by allowing AI agents to select and combine tools to accomplish tasks autonomously.

The Model-Context Protocol (MCP) enables agents to mix available assets flexibly, but this flexibility creates substantial security vulnerabilities. An agent might unknowingly chain together tools in unintended ways, potentially creating accidental data exposure or exfiltration pathways.

The supply chain risks emerge from the potential interactions between different services and tools. A seemingly innocuous tool might include instructions that manipulate the agent's behavior or create backdoor access.

Morgan described "MCP bug holes" as an incident where a previously trusted service can suddenly change its behavior, exposing the entire system to unexpected risks. This dynamic nature means that security assessments become continuously challenging, as tool interactions can change in real-time.

These risks are hazardous because they exploit the core strength of AI agents, their ability to combine resources to solve problems autonomously.

An attacker could craft a sophisticated attack that appears to be a legitimate tool request but actually contains hidden malicious instructions.

Protect AI’s Morgan warned that threat modeling these interactions requires understanding not just individual tools, but their potential combinatorial effects.

Authentication and Authorization Complexity

The authentication and authorization landscape for AI agents represents an unprecedented challenge in cybersecurity.

As Morgan illustrated, these systems create a "ballooning system" where how to effectively secure AI agents from an identity and permissioning standpoint comes into question. 

The traditional linear authentication models break down when confronting multi-agent environments with their fluid, dynamic interactions.

In complex multi-agent systems, an orchestrating agent must navigate intricate permission landscapes.

Morgan explained: "You have a user who is working with an orchestrating agent. You need to present the identity of yourself as a user. The orchestrator itself likely needs to have some sort of identity, has to have the privileges in order to interact with these specialized sub-agents."

This means each interaction requires nuanced identity verification across multiple system layers.

The forensic challenge becomes particularly acute in understanding these permission transitions.

Morgan noted, a critical question emerges: "How do you go through each step of an orchestrating agent as it was building the control flow to say at this point it assumed this role, it had elevated permissions to do this aspect?"

This means tracking exactly when and why an agent changes its authorization state, which becomes exponentially more difficult as interactions become more complex.

The most significant risk lies in the unpredictability of these authorization changes. An agent might need to dynamically adjust its permissions across various sub-agents, accessing different data components and tools.

Ultimately, Morgan suggested that solving these authentication challenges requires "AI-specific security solutions" with "end-to-end visibility" – a comprehensive approach that can track and validate every permission transition in real-time.

The goal is to create a system where each agent's identity and authorization can be precisely mapped, monitored and controlled across increasingly complex multi-agent interactions.

How to Avoid Agentic AI’s Security Shortfalls

Finally, Morgan provided a few recommendations for people willing to develop agentic AI use cases securely:

  • Understand your internal and software-as-a-service (SaaS) agentic workloads
  • Maintain visibility and control of all instruction context
  • Develop protocols and systems that facilitate proper authn/authz control
  • Develop reliable validating methods for the selection by AI agents of tools and resources to use
  • Threat model your agentic deployments and SaaS solutions
  • Add AI security-specific end-to-end solutions
  • Stay sharp on the latest development and research on agentic AI security