Varonis researchers discover new method to bypass Microsoft Copilot and steal user data
- Marijan Hassan - Tech Journalist
- 19 minutes ago
- 2 min read
Dubbed Reprompt Attack, the method allows single-click exfiltration of sensitive user data.

Security researchers at Varonis Threat Labs have uncovered a dangerous new attack vector dubbed "Reprompt," which weaponizes Microsoft Copilot against its own users. By clicking a single, legitimate-looking Microsoft link, a user can inadvertently trigger an invisible chain of events that allows an attacker to harvest sensitive information, ranging from recent file summaries to the user's physical location. All this without a single line of malicious code ever touching the victim's device.
The discovery, disclosed last week, highlights a critical "blind spot" in AI security. The inability of LLMs to distinguish between a user’s genuine intent and instructions smuggled into a URL.
The three-stage attack chain
The attack happens in three steps and does not require the victim to install anything or interact beyond clicking a single link.
Step 1: Hidden instructions in a link
Hackers can hide secret instructions inside a normal Copilot web link. When someone clicks the link, those instructions run automatically as soon as the page opens, without the user seeing or typing anything.
Step 2: Tricking Copilot’s safety checks
Copilot has safety rules that are meant to stop it from revealing sensitive information. The attackers found a way to get around this by making Copilot repeat the same action twice. On the first attempt, Copilot usually hides the sensitive data, but on the second attempt, it may reveal it fully.
Step 3: Quiet data theft in the background
The hidden instructions also tell Copilot to keep sending information to a server controlled by the attackers.
This creates a silent connection that can keep leaking data in the background, even after the user closes the Copilot chat.
What can be stolen?
In their proof-of-concept, Varonis demonstrated that a single click could allow an attacker to "probe" the AI for virtually anything the user has access to:
File access history: Summaries of documents the user recently opened in OneDrive or SharePoint.
Identity & location: The user's name, precise geographic coordinates, and upcoming travel plans.
Darkest secrets: Previous chat histories, drafted emails, and private notes stored within the Microsoft ecosystem.
Microsoft's response and mitigation
Following a responsible disclosure by Varonis, Microsoft issued a patch during the January 2026 Patch Tuesday cycle.
The company also confirmed that Microsoft 365 Copilot (Enterprise) was not affected by this specific vector due to its more rigid tenant-level data loss prevention (DLP) controls. The vulnerability primarily targeted Copilot Personal.
The bottom line
While this specific bug is patched, Varonis warns that "prompt injection" remains a fundamental challenge for AI. They recommend that users treat Copilot deep links with the same suspicion as unsolicited email attachments and that organizations implement "real-time behavioral governance" to watch for unusual AI-driven data movements.













