The Open Worldwide Application Security Project (OWASP) has published new practical guidance for securing agentic AI applications powered by large language models (LLMs).
The comprehensive guidance, published on July 28, focuses on concrete technical recommendations for builders and developers of AI agents, including AI/ML engineers, software developers, security professionals and AppSec pros.
“As AI systems evolve toward more autonomous, tool-using, and multi-agent architectures, new security challenges emerge that traditional AppSec can’t handle alone. That’s why the OWASP Gen AI Security Project has published the Securing Agentic Applications Guide v1.0, the most comprehensive and actionable open source security resource yet for Agentic AI developers and defenders,” OWASP wrote on a LinkedIn post.
The new resource has been developed in response to surging use of AI agents in organizations.
AI agents operate with a high degree of autonomy, including the ability to pass data or results to another AI tool.
These tools operate at a quicker pace than earlier-generation systems based on LLMs and work without the need for a human to give them prompts.
They are also able to adapt dynamically to changing environments without human intervention.
This lack of human oversight has created significant security concerns, especially when agentic AI applications operate in areas such as writing code and configuring systems.
Experts have also warned that the technology will help cybercriminals automate more elements of cyber-attacks, such as account takeovers.
Read now: OWASP Warns of Growing Data Exposure Risk from AI in New Top 10 List for LLMs
Agentic AI Security Focus Areas
The OWASP guidance covers security across the full agentic AI development and deployment lifecycle.
- Securing agentic architectures: The guidance emphasizes the need to embed security within the architecture itself, including strong user privilege and authentication controls, such as prompting the user for credentials when executing tasks requiring interaction with a user’s browser or computer
- Design and development security: This section focuses on measures to prevent manipulation and unintended behaviors from agentic AI models, with clear safeguards put in place at the design stage, such as instructing the model to be wary of attempts to override its core instructions
- Enhanced security actions: Organizations are encouraged to incorporate extra security tools and measures into systems to mitigate the risks posed by AI agents, such as utilizing OAuth 2.0 for permissions and authorization, using managed identity services to avoid storing credentials and encrypting sensitive data
- Tackling operational connectivity risks: This section contains measures to address security risks arising from connecting agentic AI applications with other systems such as APIs, databases and code interpreters
- Supply chain security: Organizations should limit the risks from third-party code being incorporated into agentic AI by taking steps such as managing permissions to the data sources it is running in and scanning for third-party package vulnerabilities within the code
- Assuring agentic applications: The guidance advocates for regular red teaming exercises to identify vulnerabilities and possible attack vectors in agentic systems
- Securing deployments: Several security measures should be deployed in production environments to secure AI agents, including rigorous checks in CI/CD pipelines
- Runtime hardening: Security teams should combine traditional virtual machine security hardening with agentic-specific security controls like sandboxing, auditability and runtime behavioral monitoring