Three AI Agent Platforms Hit With Their Worst-Ever Security Flaws in Two Weeks
Three different AI agent platforms were just hit with flaws carrying the most severe security rating possible - and none of the three are related to each other. That's not a coincidence. That's a pattern.
TL;DR: Flowise, Langflow, and PraisonAI - three widely used AI agent builders - all suffered maximum-severity security flaws within a two-week window. Flowise is under active attack, with hackers stealing API keys from thousands of exposed instances. The common thread: these platforms let AI agents execute code, and the walls meant to contain that code keep breaking.
What Happened
Here's the timeline.
Flowise - the drag-and-drop AI workflow builder - got tagged with CVE-2025-59528. CVSS 10.0, which is the absolute ceiling. An attacker with just an API token can execute arbitrary code on the server. VulnCheck confirmed active exploitation from a Starlink IP in early April. Between 12,000 and 15,000 instances are still sitting exposed on the internet. The payloads hitting those servers right now? Info stealers, reverse shells, cryptominers. But here's the thing - the real target is your API keys. OpenAI keys, Anthropic keys, AWS credentials. All stored on those servers.
Then Langflow. CVE-2026-33017. CISA added it to their Known Exploited Vulnerabilities catalog - that's the federal government saying "this is being used against real targets right now." The flaw allows completely unauthenticated remote code execution. The first working exploits appeared within 20 hours of public disclosure.
Then PraisonAI. CVE-2026-34938. Another perfect CVSS 10.0. A multi-agent framework where the sandbox - the thing designed to keep agent-executed code from touching your actual system - can be fully bypassed: a specially crafted input tricks the safety filter into waving malicious code through. No privileges required. No user interaction. Just code execution on your machine.
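To see why "tricking the safety filter" is so easy, consider a minimal sketch of the general failure class - a hypothetical denylist filter that pattern-matches on source text, not PraisonAI's actual code:

```python
import re

# Hypothetical denylist filter, loosely modeled on the string-matching
# approach many "code safety" checks take. Illustration only.
BLOCKED_PATTERNS = [
    r"\bimport\s+os\b",
    r"\bimport\s+subprocess\b",
    r"\bopen\s*\(",
]

def is_allowed(code: str) -> bool:
    """Return True if no blocked pattern appears in the source text."""
    return not any(re.search(p, code) for p in BLOCKED_PATTERNS)

# The filter catches the obvious version...
assert not is_allowed("import os; os.system('id')")

# ...but a text-level check never sees what code *does* at runtime.
# Trivial obfuscation behaves identically and sails through:
bypass = "__import__('o' + 's').system('id')"
assert is_allowed(bypass)  # approved, despite being equivalent to the blocked line
```

Any filter that inspects the text of the code rather than constraining what the runtime can actually do is playing this losing game.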
Why This Matters
So three unrelated platforms, three different codebases, three different development teams - all failing at maximum severity in the same two-week window. The pattern is the story.
All three platforms share one architectural feature: they let AI agents execute code on a server. And in all three cases, the mechanism designed to contain that execution broke. The sandbox didn't hold. The authentication didn't check. The safety filter got tricked.
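What a real wall looks like, at minimum, is containment enforced by the operating system rather than by inspecting the code. Here's a rough sketch - Linux-only, and nowhere near a full sandbox (no filesystem or network isolation) - of running agent-generated code in a separate process with hard limits. The function names are illustrative, not any platform's API:

```python
import resource
import subprocess
import sys

def limit_resources():
    # Applied in the child process before exec: cap CPU time and memory.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))             # 5s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20,) * 2)  # 512 MB

def run_untrusted(code: str, timeout: float = 2.0) -> str:
    """Run code in a fresh interpreter with a wall-clock timeout and rlimits."""
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env/cwd
        capture_output=True, text=True,
        timeout=timeout, preexec_fn=limit_resources,
    )
    return result.stdout

print(run_untrusted("print(2 + 2)"))   # well-behaved code runs normally

try:
    run_untrusted("while True: pass")  # runaway code hits the wall-clock limit
except subprocess.TimeoutExpired:
    print("killed: exceeded time limit")
```

Even this sketch enforces limits the kernel guarantees, instead of trusting a string filter - which is the architectural difference the three failures above have in common.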
If you're running any tool that lets an AI agent write and execute code - workflow builders, multi-agent systems, automation platforms - this is your risk category.
What To Do Right Now
Check what you're running. If you use Flowise, Langflow, or PraisonAI, update immediately. Flowise and PraisonAI patches are available. Langflow: upgrade to 1.9.0 or later.
Audit your API keys. Any API key stored on a server running one of these platforms should be rotated. If you don't know whether your keys are exposed, assume they are.
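A quick way to start that audit is scanning the environment of any affected server for values matching well-known key formats. A minimal sketch - the prefixes below are the publicly documented formats (OpenAI "sk-", Anthropic "sk-ant-", AWS access key IDs "AKIA..."), but verify against your own providers:

```python
import os
import re

# Well-known public key formats; order matters, since "sk-ant-" keys
# would also match the broader "sk-" pattern.
KEY_PATTERNS = {
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9_-]{10,}"),
    "openai": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def find_likely_keys(env):
    """Return (var_name, provider) pairs for values that look like API keys."""
    hits = []
    for name, value in env.items():
        for provider, pattern in KEY_PATTERNS.items():
            if pattern.search(value or ""):
                hits.append((name, provider))
                break  # report the first (most specific) matching provider
    return hits

if __name__ == "__main__":
    for name, provider in find_likely_keys(dict(os.environ)):
        print(f"{name}: looks like a {provider} key -- rotate it")
```

Anything this turns up on a server that ran a vulnerable version should be treated as already stolen and rotated, not merely watched.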
Ask the containment question. For any AI agent tool in your stack: does it execute code? If yes, what's the wall between the agent's code and your system? If you can't answer that, you have the same class of risk these three platforms just demonstrated.