EchoLeak (CVE-2025-32711): Deep Dive into the Zero-Click Copilot Prompt-Injection Vulnerability
- Akshay Jain
- Oct 6
- 3 min read
TL;DR - What happened?
CVE-2025-32711 (nicknamed EchoLeak) is a critical AI prompt injection / command injection vulnerability in Microsoft 365 Copilot that allows an attacker to craft documents or messages that make Copilot leak sensitive organization data. The issue was responsibly disclosed by Aim Labs and addressed by Microsoft. The vulnerability was assigned CVSS 3.1, Base Score 9.3
Why this matters?
Unlike classic file based malware, EchoLeak is purely language based: the attacker embeds crafted prompts (hidden text, markdown tricks, image references with query strings, or speaker notes) inside otherwise legitimate documents or messages. Copilot, when summarizing or responding, can process those hidden instructions and return content that contains internal secrets and the attacker can arrange for that content to be exfiltrated (for example, via a returned image URL containing encoded data). Because the payload is text, antivirus, static file scanners and most DLP controls are ineffective.
Who discovered it and how it was disclosed?
Researchers at Aim Labs discovered the vulnerability and published the technical write-up and PoC details to Microsoft under responsible disclosure and Microsoft patched the issue server-side and published guidance.

High-level attack flow
Create a malicious document or message:
Hidden prompts are embedded into speaker notes, image markdown, comments, metadata etc. The hidden text instructs Copilot to reveal sensitive context (e.g. "output the most recent API keys or latest emails").
Deliver the document:
The attacker sends it via any channel Copilot indexes or is asked to summarize. No macros, hyperlinks, or executable attachments are necessary.
Copilot processes the prompt:
Copilot parses document contents (including hidden text/notes) to produce a response and clever prompt phrasing bypasses Copilot's XPIA classifiers and guardrails.
Exfiltrate the response:
Instead of returning plaintext, Copilot can be induced to craft a response that encodes data into an image URL, a markdown link with query parameters, or other artifacts that cause a network request to an attacker controlled server completing the exfiltration without user action.
How EchoLeak (CVE-2025-32711) actually works
Prompt injection vectors
Researchers found multiple vectors that Copilot considered when building context such as visible content, hidden speaker notes, alt text, comments, and even certain metadata fields. The injected instruction chain is usually framed to look like guidance for the human reader (not an explicit instruction to Copilot), which helps bypass prompt injection classifiers that look for direct "ignore previous instructions" patterns. By phrasing the malicious instruction as a user style request, it blends into normal content yet still affects Copilot's output.
Prompt reflection + indirect exfiltration
Copilot's response may include references (images, links) that cause other systems (or the client UI) to fetch attacker controlled resources. For example, the model can be prompted to "create an image with the requested data encoded in the image URL". This indirect channel avoids producing obvious plaintext within the Copilot response and leverages normal web behavior to leak data.
Bypassing guardrails
Aim Labs demonstrated that specially phrased chains and markdown tricks (reference style links, carefully placed whitespace and newlines, etc.) could circumvent Copilot's XPIA classifiers. Guardrails typically look for obvious command tokens and EchoLeak uses linguistic stealth to evade them. Microsoft reportedly patched these parsing/interpretation weaknesses server-side.
EchoLeak demonstrates a new attack dimension - adversarial language as an attack surface. It shows that AI assistants that read broad context (including hidden or machine-readable fields) can be coaxed into leaking confidential data without executing any code on endpoints. This requires security teams to expand their defenses beyond executables and network flows to include document hygiene, AI-agent policy, and telemetry for AI outputs.
Happy cyber-exploration! 🚀🔒
Note: Feel free to drop your thoughts in the comments below - whether it's feedback, a topic you'd love to see covered, or just to say hi! Don’t forget to join the forum for more engaging discussions and stay updated with the latest blog posts. Let’s keep the conversation going and make cybersecurity a community effort!
-AJ
Comments