Anthropic’s Claude Chrome extension, now used by over 3 million people, was recently revealed to host a dangerous zero‑click vulnerability that could have allowed an attacker to silently inject malicious prompts into the AI assistant simply by tricking a victim into visiting a webpage.
The issue, dubbed “ShadowPrompt,” demonstrates how extending highly‑privileged AI agents into the browser can dramatically expand an organization’s attack surface when trust boundaries and message‑origin checks are not rigorously enforced.
The attack chain exploited two underlying weaknesses: an overly permissive origin allowlist in the extension and a DOM‑based cross‑site scripting (XSS) flaw in an Arkose Labs CAPTCHA component hosted on a‑cdn.claude.ai.
The extension’s messaging API listened for a message type named onboarding_task, which accepted a prompt parameter and passed it directly to Claude’s sidebar, originally intended for an onboarding demo.
Crucially, the extension did not restrict which origins could send this message; any domain under *.claude.ai was treated as trusted, including third‑party CAPTCHA infrastructure.
That condition opened the door to abuse once researchers discovered a vulnerable CAPTCHA game‑core component hosted on a‑cdn.claude.ai. The component accepted postMessage messages from any web page, but never validated the sender’event.origin, allowing adversaries to inject arbitrary data into the app state.
One of the injected fields, stringTable, controlled UI strings and was rendered into the page using React’s dangerouslySetInnerHTML, with no sanitization, enabling attacker‑supplied HTML payloads such as <img src=x onerror="..."> to execute JavaScript in the context of a‑cdn.claude.ai.
Once JavaScript execution was achieved on the allowed subdomain, the malicious script used chrome.runtime.sendMessage() to send an onboarding_task message directly to the Claude extension.
Because the origin matched *.claude.ai, the extension routed the attacker‑controlled prompt into Claude’s sidebar as if the user had typed it, triggering actions like reading Gmail, accessing Google Drive, exporting chat history, or even sending emails on the victim’s behalf, all without clicks, dialogs, or permission prompts.
From a browser‑security perspective, the episode highlights two common failure modes: over‑broad origin allowlists and insufficient validation of third‑party content hosted on first‑party domains.
Modern Chrome extensions routinely expose chrome.runtime.sendMessage() handlers to web pages, but permissive wildcards such as *.claude.ai blur the boundary between “internal” and “potentially attacker‑reachable” surfaces, especially when CAPTCHA or analytics code runs on shared subdomains.
The case also illustrates how DOM‑based XSS in deeply nested, versioned assets (here, an Arkose Labs game‑core bundle) can become a pivot point for targeting high‑value browser extensions.
At a broader architectural level, the ShadowPrompt chain reflects a growing class of agentic‑browser threats. When an AI assistant can navigate tabs, read page content, execute scripts, and interact with web services on a user’s behalf, it effectively becomes a semi‑autonomous agent with broad ambient privileges.
Attackers no longer need full remote code execution on the endpoint; instead, they can hijack the agent’s trust model, turning the assistant into a vector for credential theft, data exfiltration, and lateral movement across SaaS applications.
Anthropic eventually mitigated the issue by tightening the extension’s origin check to accept messages only from https://claude.ai and by coordinating with Arkose Labs to patch the CAPTCHA XSS, the vulnerable game‑core URL was removed and now returns a 403 error.
For defenders, the takeaway is that AI‑driven browser extensions must be treated as high‑risk assets, with strict origin‑isolation, rigorous third‑party vetting, and continuous inventorying across organizational fleets to detect and remove overly permissive or outdated add‑ons.
Site: cybersecuritypath.com
Reference: Source