Microsoft, Google, Apple, Windows, Android, iOS, Internet, Cyber Security, Hacking, Malware, Smartphone, Mobile App

Trending

Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot

A new attack dubbed ‘EchoLeak’ is the first known zero-click AI vulnerability that enables attackers to exfiltrate sensitive data from Microsoft 365 Copilot from a user’s context without interaction.

The attack was devised by Aim Labs researchers in January 2025, who reported their findings to Microsoft. The tech giant assigned the CVE-2025-32711 identifier to the information disclosure flaw, rating it critical, and fixed it server-side in May, so no user action is required.

Also, Microsoft noted that there’s no evidence of any real-world exploitation, so this flaw impacted no customers.

Microsoft 365 Copilot is an AI assistant built into Office apps like Word, Excel, Outlook, and Teams that uses OpenAI’s GPT models and Microsoft Graph to help users generate content, analyze data, and answer questions based on their organization’s internal files, emails, and chats.

Though fixed and never maliciously exploited, EchoLeak holds significance for demonstrating a new class of vulnerabilities called ‘LLM Scope Violation,’ which causes a large language model (LLM) to leak privileged internal data without user intent or interaction.

As the attack requires no interaction with the victim, it can be automated to perform silent data exfiltration in enterprise environments, highlighting how dangerous these flaws can be when deployed against AI-integrated systems.

How EchoLeak works

The attack begins with a malicious email sent to the target, containing text unrelated to Copilot and formatted to look like a typical business document.

The email embeds a hidden prompt injection crafted to instruct the LLM to extract and exfiltrate sensitive internal data.

Because the prompt is phrased like a normal message to a human, it bypasses Microsoft’s XPIA (cross-prompt injection attack) classifier protections.

Later, when the user asks Copilot a related business question, the email is retrieved into the LLM’s prompt context by the Retrieval-Augmented Generation (RAG) engine due to its formatting and apparent relevance.

The malicious injection, now reaching the LLM, “tricks” it into pulling sensitive internal data and inserting it into a crafted link or image.

Aim Labs found that some markdown image formats cause the browser to request the image, which sends the URL automatically, including the embedded data, to the attacker’s server.

Overview of the attack chain
Overview of the attack chain
Source: Aim Labs

Microsoft CSP blocks most external domains, but Microsoft Teams and SharePoint URLs are trusted, so these can be abused to exfiltrate data without problem.

The EchoLeak attack in action
The EchoLeak attack in action
Source: Aim Labs

EchoLeak may have been fixed, but the increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defenses.

The same trend is bound to create new weaponizable flaws adversaries can stealthily exploit for high-impact attacks.

It is important for enterprises to strengthen their prompt injection filters, implement granular input scoping, and apply post-processing filters on LLM output to block responses that contain external links or structured data.

Moreover, RAG engines can be configured to exclude external communications to avoid retrieving malicious prompts in the first place.

Leave A Reply

Your email address will not be published.

This website uses cookies to improve your experience. We'll assume you're ok with this, but you can opt-out if you wish. Accept Read More

Privacy & Cookies Policy