A critical vulnerability found in Microsoft’s Copilot puts a focus on the growing security risks that come with new AI tools like agents and RAG (retrieval-augmented generation) that give AI systems – and possibly bad actors – greater access to sensitive corporate data.

In a report this week, threat intelligence researchers with Aim Security detailed the zero-click vulnerability, which they dubbed “EchoLeak,” that they wrote allows “attackers to automatically exfiltrate sensitive and proprietary information from M365 [Microsoft 365] Copilot context, without the user’s awareness, or relying on any specific victim behavior.”

“This attack chain showcases a new exploitation technique we have termed ‘LLM Scope Violation’ that may have additional manifestations in other RAG-based chatbots and AI agents,” the researchers with Aim Labs wrote. “This represents a major research discovery advancement in how threat actors can attack AI agents – by leveraging internal model mechanics.”

Zero-Click Flaw in Microsoft Copilot Illustrates AI Agent, RAG Risks

Zero-Click Flaw in Microsoft Copilot Illustrates AI Agent, RAG Risks

Aim contacted Microsoft about the security flaw – which is being tracked as CVE-2025-32711 and given a CVSS severity score of 9.3 out of 10 – and the IT giant issued an update saying the vulnerability has been patched and requires no user action to fix it.

LLMs, RAG, Agents Become Targets

Large language models (LLMs) and other AI tools have been targeted by threat actors through a variety of methods since OpenAI first rolled out ChatGPT in November 2022 and set off the generative AI era. OWASP (Open Worldwide Application Security Project) last year listed the top 10 security risks facing LLMs.

The RAG framework allows LLMs to access and integrate external data sources outside of the data they were pre-trained on into responses. It’s allowed enterprises to make such LLMs more relevant to their needs by including their corporate data into the AI mix. AI agents are starting to be incorporated now. They are small pieces of AI code that work autonomously to solve complex problems, including finding the needed data – often from corporate sources – collaborating with other agents, and taking actions.

Researchers with Palo Alto Networks’ Unit 42 threat intelligence group in May wrote about the risks involving AI agents – which also were outlined in an OWASP report in February – and showed through simulated attacks on CrewAI and AutoGen, two open source agent frameworks. They noted that “most vulnerabilities and attack vectors are largely framework-agnostic, arising from insecure design patterns, misconfigurations and unsafe tool integrations, rather than flaws in the frameworks themselves.”

“Agentic applications inherit the vulnerabilities of both LLMs and external tools while expanding the attack surface through complex workflows, autonomous decision-making and dynamic tool invocation,” they wrote. “This amplifies the potential impact of compromises, which can escalate from information leakage and unauthorized access to remote code execution and full infrastructure takeover.”

It Starts with an Email

Microsoft 365 Copilot is like a lot of other AI assistants, helping workers to use the technology when using such applications as Word, PowerPoint, SharePoint, OneDrive and Outlook and, now, deploy agents. The LLM Scope Violation vulnerability that Aim outlined could make any data that Copilot has access to vulnerable.

Only members of an organization using M365 can access Copilot, but Aim researchers said their exploit string started by sending a specially crafted email aimed at getting the AI assistant to offer up sensitive data. It’s what the researchers called an indirect prompt injection.

The email includes instructions to collect sensitive information, with the instructions written in a way that indicates they were for the recipient, never mentioning AI, assistants, Copilot or any other AI-related tool, so they bypass any XPIA (cross-prompt injection attack) classifiers developed by Microsoft to detect malicious emails.

Finding the Data

The attack relies on the user asking Copilot for the information requested in the email and then replying to the malicious message with the information, essentially handing the data over to the attacker. The message can also trigger the AI agent – which can operate autonomously without human intervention – in Copilot to grab the data even before being prompted by the victim because it can scan through emails and act on requests even before the recipient has opened the email.

The attacker also has to find ways to bypass other security mechanisms, including Content Security Policy (CSP), which is designed to prevent data from being exfiltrated.

“Lastly, we note that not only do we exfiltrate sensitive data from the context, but we can also make M365 Copilot not reference the malicious email,” the researchers wrote. “This is achieved simply by instructing the ‘email recipient’ to never refer to this email for compliance reasons.”

AI Agents and Their Risks

They wrote that their work “represents a major research discovery advancement in how threat actors can attack AI agents – by leveraging internal model mechanics. … This attack is based on general design flaws that exist in other RAG applications and AI agents.”

The attack results not only in the attacker being able to exfiltrate sensitive data from the LLM, but also the LLM being used against itself, ensuring that the most sensitive data is leaked, they added. It doesn’t rely on specific user behavior and can be run in both single-turn conversations with the LLM – in which the model fully responds to a single request and no back-and-forth exchanges are needed – and multi-turn AI conversations.