The Salesforce “ForcedLeak” Incident Is a Symptom — Identity & Control Plane Failures Are the Disease. Thoughts from three experts
John Spiegel
9/28/2025 · 4 min read


Last week, a security researcher dropped a bombshell: Salesforce’s Agentforce platform had a flaw they dubbed “ForcedLeak,” in which an attacker bought an expired Salesforce-allowed domain for $5, then tricked the AI agent into exfiltrating sensitive CRM lead data via that domain. That domain had been in their Content Security Policy, so the system allowed it. The prompt injection payload did the rest. (Salesforce has since patched the hole.)
This isn’t some exotic, fringe exploit. It’s a warning shot: if such an attack can succeed in a high-profile SaaS environment, then every organization embracing AI agents needs to ask: are we building on sand?
Because the core vulnerabilities revealed here are not merely about careless configuration or sloppy patching. They point to deeper architectural sins: weak identity management for agents, and the collapse of clear separation between control plane and data plane.
None of this surprises me: it is the same message I heard again and again from three experts I interviewed on the “No Trust” Podcast I co-host with Jaye Tillson. Let’s dive in.
Richard Bird’s Warning: Identity Isn’t Just for People Anymore
Richard Bird is a well-known cybersecurity identity expert, and he was blunt: we’ve spent the last decade getting identity wrong, and the AI era raises the stakes even higher. In his recent interview on our show, he reminded us that authentication isn’t usually the failure point; it’s authorization and identity misuse. The fact that we still think about identity mainly in terms of “users behind keyboards” is a mistake.
Bird’s call is clear: identity must extend beyond people. Service accounts, IoT devices, and now autonomous agents all need their own unique, enforceable identities. If we fail here, every control plane we build on top will inherit flawed assumptions, and our adversaries will exploit the cracks.
The ForcedLeak exploit is a perfect case in point. It’s likely the AI agent didn’t have a crisp, independently verifiable identity with narrowly scoped privileges. Instead, it acted under a broader umbrella of trust, where a poisoned prompt and a $5 domain were enough to punch through. Bird’s point lands hard: without agent identity as a first-class citizen, trust collapses before it even begins.
Control Plane vs. Data Plane: AI Blurs the Line—You Must Reinforce It
George Finney, author of the book “The Rise of the Machines”, issued a similarly stark warning when interviewed on the podcast: AI tends to erode architectural demarcations. In legacy systems, the control plane (policy, authorization, orchestration) is separate from the data plane (traffic, payloads, flows). But with AI agents, we let them ingest input, reason over it, and then act. That reasoning becomes part of the control signal.
In simpler terms: your data becomes your commands. The attacker’s payload was a prompt (data) that drove behavior (control). The system trusted the AI agent’s internal reasoning without a gatekeeper strong enough to detect or block the deviation.
Unless you reassert that separation with policy enforcement points, strict allow-lists, validation barriers, and sandboxing, you’ll effectively let the agent be both judge and executor. That’s how you get forced leaks.
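To make that concrete, here is a minimal sketch of what a policy enforcement point outside the agent’s reasoning might look like, assuming a Python tool layer; the domain names and the guarded_fetch wrapper are illustrative assumptions, not Agentforce’s actual API.

```python
from urllib.parse import urlparse

# Illustrative allow-list: only domains explicitly approved for agent egress.
ALLOWED_DOMAINS = {"api.example-crm.com", "internal.example.com"}

class EgressBlocked(Exception):
    """Raised when the agent tries to send data somewhere the policy forbids."""

def enforce_egress_policy(url: str) -> str:
    """Policy enforcement point: runs in deterministic code outside the model's
    reasoning, so a poisoned prompt cannot talk its way past it."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_DOMAINS:
        raise EgressBlocked(f"Outbound call to {host!r} is not on the allow-list")
    return url

def guarded_fetch(url: str, payload: dict) -> None:
    """Hypothetical tool wrapper every outbound agent call must pass through."""
    enforce_egress_policy(url)
    # ... perform the actual HTTP request here ...
```

The design point is that the check lives in code the model cannot rewrite: even a perfectly crafted prompt injection cannot add a $5 expired domain to the allow-list at runtime.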
Governance, Segmentation, & Kill Switches — Not Add-Ons, But Core Requirements
Josh Woodruff, author of the book Agentic AI + Zero Trust: A Guide for Business Leaders, made it clear when interviewed on the podcast: identity alone won’t save you. In his view, every AI agent must operate inside defined guardrails, with segmentation, monitoring, and circuit breakers built in from the start. Without those constraints, authentication is meaningless: an “approved” agent can still run amok.
In simpler terms: agents can’t be trusted to do the right thing just because they passed an identity check. They need constant oversight. Telemetry must track their behavior in real time, policies must limit what systems they can touch, and kill switches must stand ready to halt them the moment they deviate. Unless you design those controls into the architecture, you effectively hand the agent unchecked authority, and that’s how small errors or poisoned prompts turn into systemic failures.
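As one way to picture this, here is a minimal sketch of a circuit breaker wrapped around an agent’s tool calls, assuming a Python orchestration layer; the class name, thresholds, and time window are illustrative assumptions rather than a prescribed implementation.

```python
import time

class AgentCircuitBreaker:
    """Illustrative circuit breaker: trips when an agent racks up too many
    policy violations in a short window, halting further actions."""

    def __init__(self, max_violations: int = 3, window_seconds: float = 60.0):
        self.max_violations = max_violations
        self.window_seconds = window_seconds
        self.violations = []   # timestamps of recent violations
        self.tripped = False

    def record_violation(self) -> None:
        now = time.monotonic()
        # Keep only violations inside the sliding window, then add the new one.
        self.violations = [t for t in self.violations if now - t < self.window_seconds]
        self.violations.append(now)
        if len(self.violations) >= self.max_violations:
            self.tripped = True  # kill switch engaged; requires a human reset

    def allow_action(self) -> bool:
        """Checked before every tool call; once tripped, the agent is halted."""
        return not self.tripped
```

Usage is simple: the orchestrator calls allow_action() before every tool invocation and record_violation() whenever a policy check fails, so a misbehaving agent is stopped after a handful of strikes instead of after a data leak.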
The Triad You Can’t Skip: Identity, Control Plane, Data Plane
If I were to reduce this to a mantra—an operational imperative for any AI program—it’s this triad:
1. Agent Identity Must Be Crisp
As Bird argues, identity is the foundation. Each agent must have its own principal with a full credential lifecycle, revocation path, and auditability.
2. Enforce Separation Between Control and Data Planes
Taking from Finney, don’t let data masquerade as commands. Policy enforcement must be immune to prompt corruption.
3. Governance, Monitoring, Kill Switches
As Woodruff urges, bake in constraints, anomaly detection, and shutdown capability from day one.
If any one leg of the stool is weak, the others topple with it.
Tactical Prescriptions
Give every agent its own identity: short-lived certs, tokens, or keys. No shared or inherited logins. (See the sketch after this list.)
Start with zero privilege and add only what’s necessary.
Restrict endpoints with runtime allow-lists and certificate validation.
Treat all inputs as hostile, sanitize them, and validate outputs.
Use segmentation and micro-segmentation to wall off agent functions.
Monitor behavior continuously, alert on anomalies, and keep logs auditable.
Build in kill switches and circuit breakers you can actually use.
Run red-team exercises against prompts and domains before going live.
Deploy incrementally; don’t let agents loose in crown-jewel systems on day one.
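To illustrate the first two prescriptions, here is a minimal sketch of a per-agent, short-lived, zero-privilege-by-default credential, assuming a Python control layer; the AgentCredential class, the scope names, and the 15-minute lifetime are illustrative assumptions, and in practice the credential would be issued by your identity provider or secrets manager rather than minted in application code.

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    """Illustrative per-agent credential: its own principal, narrow scopes, short life."""
    agent_id: str                # the agent's own principal, never a shared login
    scopes: frozenset            # start from zero privilege and add only what's needed
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(minutes=15)
    )

    def is_valid(self, required_scope: str) -> bool:
        """Authorization check: both freshness and scope must hold, or the call fails closed."""
        return datetime.now(timezone.utc) < self.expires_at and required_scope in self.scopes

# Usage: a lead-related tool demands a specific scope before acting.
cred = AgentCredential(agent_id="agentforce-sales-01", scopes=frozenset({"crm.leads.read"}))
assert cred.is_valid("crm.leads.read")
assert not cred.is_valid("crm.leads.export")  # never granted, so the action is refused
```

The point is not the token format; it is that every agent gets its own principal, a clock that runs out quickly, and a scope check that fails closed.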
The Final Word
ForcedLeak isn’t an isolated mistake. It’s a preview of what happens when identity is half-baked, when control planes are porous, and when AI agents are trusted without limits.
AI agents are coming fast. They’ll change how we work, how we build, how we defend. But if identity, control planes, and data planes aren’t re-engineered with Zero Trust discipline, then incidents like ForcedLeak won’t be the exception. They’ll be the norm.