Langflow RCE Exploited in 20 Hours: How a Single API Endpoint Gave Attackers the Keys to AI Pipelines


On March 17, 2026, a security advisory dropped for CVE-2026-33017, a code injection flaw in Langflow, the open-source framework used to build AI agents and Retrieval-Augmented Generation pipelines. Within 20 hours, attackers had working exploits. Within 24 hours, they were harvesting environment variables and database credentials from exposed instances. No public proof-of-concept code existed at the time. The advisory itself contained enough detail for threat actors to build their own.

On March 25, CISA added the flaw to its Known Exploited Vulnerabilities catalog and set an April 8 deadline for federal agencies to patch or stop using the product entirely.

The Vulnerability

CVE-2026-33017 is an unauthenticated remote code execution flaw with a CVSS score of 9.3. It affects Langflow versions through 1.8.1 and targets the /api/v1/build_public_tmp/{flow_id}/flow endpoint, which is designed to let unauthenticated users build public flows.

The problem: this endpoint accepts attacker-supplied flow data containing arbitrary Python code. Langflow passes that code to Python’s exec() function without sandboxing. A single crafted HTTP POST request with a JSON payload is all an attacker needs. No authentication. No CSRF tokens. No multi-step chain.
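
To make the pattern concrete, here is a minimal sketch of what "unsandboxed exec() on attacker-supplied flow data" looks like. This is illustrative code, not Langflow's actual implementation: the function and field names are invented, and the payload is deliberately benign where a real attacker would import os and read secrets or spawn a shell.

```python
import json

def build_flow(request_body: str) -> dict:
    """Mimics the vulnerable pattern: attacker-controlled JSON carries a
    'code' field that is handed straight to exec() with no sandboxing."""
    flow = json.loads(request_body)
    namespace: dict = {}
    for node in flow.get("nodes", []):
        # The core flaw: arbitrary Python from the request body executes
        # with the full privileges of the server process.
        exec(node.get("code", ""), namespace)
    return namespace

# A benign stand-in payload; swap the string for "import os; ..." and
# the same path yields full remote code execution.
payload = json.dumps({"nodes": [{"code": "marker = 'attacker ran code'"}]})
result = build_flow(payload)
print(result["marker"])
```

One crafted POST body is the whole attack: there is no parser quirk or memory corruption to exploit, just a server that runs whatever Python string it receives.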

The flaw is distinct from CVE-2025-3248, which CISA flagged for active exploitation in May 2025. That earlier bug hit a different Langflow endpoint (/api/v1/validate/code). The fix for CVE-2025-3248 added authentication to the validation endpoint. But the same class of vulnerability, unsandboxed exec() on user-supplied code, persisted on the public flow build endpoint. Security researcher Aviral Srivastava found CVE-2026-33017 while examining how the maintainers had patched the earlier flaw.

When Langflow’s AUTO_LOGIN setting is true (the default), all prerequisites for exploitation can be met by an unauthenticated attacker: log in via the auto-login endpoint to get a superuser token, create a public flow, then exploit the build endpoint with embedded malicious code.
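
A deployment can guard against the risky default at startup. Langflow's configuration docs describe a LANGFLOW_AUTO_LOGIN environment variable controlling this behavior; the check below assumes that variable name and its enabled-by-default semantics, so verify both against your deployed version.

```python
import os

def auto_login_disabled(env: dict) -> bool:
    """Return True only if auto-login is explicitly turned off.
    An absent variable counts as unsafe, because the setting
    defaults to enabled."""
    return env.get("LANGFLOW_AUTO_LOGIN", "true").strip().lower() in ("false", "0")

# Fail loudly in a startup or CI script rather than serve an API where
# anyone who can reach it may obtain a superuser token.
if not auto_login_disabled(os.environ):
    print("WARNING: AUTO_LOGIN appears enabled; on vulnerable versions, "
          "unauthenticated users can complete the full exploit chain.")
```

Disabling auto-login does not patch the exec() flaw, but it removes the first link in the unauthenticated attack chain described above.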

How Attackers Moved

Sysdig’s Threat Research Team documented the timeline in detail. Their honeypot fleet, deployed across multiple cloud providers and regions with vulnerable Langflow instances, detected the first exploitation attempts 20 hours after the advisory went public on GitHub. Python-based exploit scripts appeared at the 21-hour mark. By 24 hours, attackers were exfiltrating .env and .db files from compromised instances.

The speed is notable because no public proof-of-concept repository existed on GitHub at the time. Sysdig confirmed that attackers constructed working exploits directly from the information in the advisory, which documented the vulnerable endpoint path and the mechanism for code injection via flow node definitions. The advisory description was, in effect, a recipe.

Exfiltrated data included API keys for OpenAI, Anthropic, and AWS, along with database connection strings. Compromising a single Langflow instance can provide lateral access to cloud accounts, data stores, and connected enterprise systems. For organizations running Langflow as middleware between LLMs and internal databases, the blast radius extends well beyond the tool itself.

Why AI Tooling Is a High-Value Target

Langflow has over 145,000 GitHub stars and is popular among data science teams for its drag-and-drop interface that connects AI nodes into executable pipelines. That popularity translates to a large number of exposed instances, many deployed by teams that do not follow the same patching cadence as production infrastructure.

This pattern is now a recognized attack surface. The LiteLLM and Ultralytics supply chain compromises earlier in March 2026 targeted the same category: AI development tools with broad API access deployed outside standard security review processes. The Trivy supply chain attack, which CISA also added to its KEV catalog this week as CVE-2026-33634, likely fed directly into the LiteLLM compromise.

AI workflow tools are particularly attractive because they concentrate credentials. A single Langflow instance might hold keys for multiple LLM providers, vector databases, cloud storage, and internal APIs. The tools are designed to orchestrate access across services, which means they store exactly the secrets an attacker wants.

Sysdig researchers described this as a structural shift, not an outlier. Their Zero Day Clock analysis showed the window between vulnerability disclosure and active exploitation collapsing from months to hours. For AI-adjacent tools with large install bases and credential-rich environments, the incentives for rapid weaponization are obvious.

The Recurring Pattern

CVE-2026-33017 is the second critical Langflow RCE in under a year. Both flaws shared the same root cause: unsandboxed execution of user-supplied Python code. The first fix addressed one endpoint. The second vulnerability appeared on a different endpoint using the same dangerous pattern.

This is a code architecture problem, not a one-off bug. When a framework’s core design routes user input through exec() in multiple places, fixing one path does not eliminate the class of vulnerability. Srivastava found the second flaw specifically by following the first fix to see whether the same pattern recurred elsewhere. It did.
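
That kind of class-wide audit can be partly automated. The sketch below walks a module's AST and flags every bare call to exec() or eval(), the pattern behind both Langflow CVEs; it is a starting point for a codebase sweep, not a complete static analyzer (it misses aliased or indirect calls, for example).

```python
import ast

def find_dynamic_exec(source: str, filename: str = "<memory>") -> list:
    """Report (file, line, name) for every direct call to exec() or
    eval() in a module's source text."""
    hits = []
    for node in ast.walk(ast.parse(source, filename)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in ("exec", "eval")):
            hits.append((filename, node.lineno, node.func.id))
    return hits

# A toy module that repeats the vulnerable pattern.
sample = "def build(user_code):\n    exec(user_code)\n"
print(find_dynamic_exec(sample, "flows.py"))  # [('flows.py', 2, 'exec')]
```

Running a scan like this across every endpoint handler, not just the one named in an advisory, is exactly the audit the two Langflow flaws argue for.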

For developers building on Langflow or similar low-code AI platforms, the lesson is specific: do not assume that a patched vulnerability means the underlying design flaw is resolved. Audit the codebase for the same class of issue across all endpoints, not just the one that was reported.

What to Do Now

Langflow version 1.9.0 addresses CVE-2026-33017. Organizations running earlier versions should upgrade immediately or, if that is not possible, disable or restrict the vulnerable endpoint and ensure Langflow is not exposed directly to the internet.
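
A quick environment check can confirm whether an installed copy predates the fix. This sketch assumes the package is distributed under the name "langflow" with plain x.y.z version strings; odd or pre-release strings are conservatively treated as unpatched.

```python
from importlib.metadata import PackageNotFoundError, version

PATCHED = (1, 9, 0)  # first release reported to address CVE-2026-33017

def parse(v: str) -> tuple:
    """Naive numeric parse; adequate for plain x.y.z version strings."""
    return tuple(int(p) for p in v.split(".")[:3])

def langflow_is_patched() -> bool:
    try:
        return parse(version("langflow")) >= PATCHED
    except PackageNotFoundError:
        return True   # not installed in this environment
    except ValueError:
        return False  # unparseable version: treat as unpatched

print("patched" if langflow_is_patched()
      else "UPGRADE: pip install 'langflow>=1.9.0'")
```

Run it in each environment where Langflow might live, including notebooks and data science sandboxes that rarely see routine patching.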

Sysdig’s recommended response goes further. Any Langflow instance that was publicly accessible before the patch should be treated as potentially compromised. Rotate all API keys, database credentials, and cloud secrets stored in or accessible through the instance. Monitor outbound traffic for connections to unusual callback services. Audit environment variables for signs of exfiltration.
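
Rotation starts with knowing what was exposed. The snippet below inventories likely credentials in a dumped .env by well-known key prefixes; the patterns are rough heuristics (an assumption, not official validators from the providers), and the script intentionally reports counts rather than the secret values themselves.

```python
import re

# Rough, publicly known prefixes; extend for your own providers.
KEY_PATTERNS = {
    "openai": re.compile(r"\bsk-[A-Za-z0-9_-]{16,}"),
    "anthropic": re.compile(r"\bsk-ant-[A-Za-z0-9_-]{16,}"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def inventory_secrets(env_text: str) -> dict:
    """Count likely credentials per provider so you know what to
    rotate; never log or print the matched values."""
    return {name: len(pat.findall(env_text))
            for name, pat in KEY_PATTERNS.items() if pat.findall(env_text)}

sample = ("OPENAI_API_KEY=sk-abcdefghijklmnopqrst\n"
          "AWS_ACCESS_KEY_ID=AKIAABCDEFGHIJKLMNOP\n")
print(inventory_secrets(sample))
```

Pair the inventory with provider-side audit logs: a rotated key closes the door, but only the logs tell you whether it was already used.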

CISA’s April 8 deadline formally applies to federal civilian agencies under Binding Operational Directive 22-01, but the agency recommends all organizations treat it as a benchmark. If upgrading is impossible and no workaround is viable, CISA says to stop using the product.

The Bigger Picture

The AI tooling ecosystem is repeating the early cloud era’s security mistakes. Teams deploy powerful orchestration platforms with broad credential access in environments that lack the monitoring, patching discipline, and network segmentation of production infrastructure. Data science teams move fast. Security teams do not always know what is running.

CVE-2026-33017 and the supply chain attacks on LiteLLM and Ultralytics point to the same conclusion: AI workloads are landing in attackers’ crosshairs because they offer high-value data, software supply chain access, and often lack the security controls applied to traditional production systems. Every organization deploying AI development tools should be asking whether those tools are inventoried, patched, network-restricted, and monitored to the same standard as their production databases.

For most teams, the honest answer is no. That gap is what attackers are counting on.

