In a stark reminder that even cutting-edge AI tools need robust input validation, researchers have uncovered a high-severity vulnerability in the OpenClaw package, dubbed OC-19.
Tracked as GHSA-2qj5-gwg2-xwc4 and CVE-2026-27001, this flaw allows attackers to inject malicious content into LLM prompts via unsanitized paths in the current working directory (CWD).
OpenClaw, an npm package for AI-driven workflows, embeds the host system’s CWD (essentially the workspace path) directly into its agent system prompts without proper sanitization.
This oversight becomes a goldmine for attackers who control the runtime environment. Imagine luring OpenClaw into executing from a directory named with control characters, like newlines (\n), Unicode bidirectional overrides (bidi), or zero-width markers.
These sneaky artifacts shatter the prompt’s structure, letting adversaries slip in their own instructions mid-prompt.
The impact? Prompt injection at its finest. Malicious payloads could hijack the AI agent’s logic, forcing it to misuse tools, leak sensitive data, or execute unintended actions.
Picture an enterprise deploying OpenClaw in a shared CI/CD pipeline; an attacker crafts a workspace folder like legit-project\u202Eevil-instructions-here (using U+202E for right-to-left override).
When OpenClaw reads the CWD and stuffs it into the LLM prompt, the bidi trick visually reverses text, injecting commands that the model interprets as legitimate. Suddenly, your AI agent is spilling API keys or running unauthorized scripts.
Affected versions span all releases before 2026.2.15, with the latest vulnerable release being 2026.2.14 (as of February 16, 2026).
Patched versions (>= 2026.2.15) clamp down hard: the fix sanitizes workspace paths by stripping Unicode control and formatting characters, as well as explicit line/paragraph separators. As defense-in-depth, path resolution now gets the same treatment.
The key commit 6254e96acf16e70ceccc8f9b2abecee44d606f79, ensures prompts stay pristine.
Classified as High severity, OC-19 maps to CWE-77: Improper Neutralization of Special Elements used in a Command (‘Command Injection’).
While not classic shell injection, the principle holds that externally influenced input (the CWD) modifies downstream LLM “commands” without neutralization. MITRE’s CWE page nails it: this lets upstream data twist the intended behavior.
For users, immediate action is key. Audit your npm dependencies with npm audit or tools like Snyk, and upgrade to 2026.2.15+. In multi-tenant setups, enforce directory naming policies banning control characters, tools like sanitize-filename can help preemptively.
Developers building LLM agents should adopt prompt templating libraries (e.g., LangChain’s guards) and always sanitize dynamic insertions.
This flaw underscores a broader trend: as AI agents proliferate in DevOps and automation, path-related injections are emerging threats. We’ve seen similar issues in tools like Auto-GPT.
Site: cybersecuritypath.com
Reference: Source
%20(1).webp)
.webp)