Enterprises are handing the keys to AI agents. These agents book meetings, query databases, write code, and trigger financial workflows all without a human clicking approve. The question nobody is asking until it is too late: when one of these agents is compromised, who owns the breach?
Agentic AI is no longer a pilot project or a conference talking point. According to Gartner, by the end of 2026, over 40% of enterprise applications will embed role-specific AI agents. JPMorgan Chase runs agents that autonomously monitor millions of transactions and rewrite their own fraud-detection rules in real time. Siemens uses agentic orchestration across global supply chains to reroute logistics without human sign-off. Ford deploys agents that take a sketch and run it through engineering stress tests end-to-end, in seconds.
The productivity gains are real. But so is a new accountability gap that most organizations have not closed.

A scenario that already happened

This is the pattern. It does not start with a nation-state adversary or an advanced persistent threat. It starts with an employee who connected an AI agent to corporate systems, granted it broad permissions to make it work smoothly, and moved on. The agent became a persistent, credentialed, largely unmonitored identity inside the enterprise.

Why agents break traditional security models
Every security framework built over the last two decades was designed for a simple mental model: a human takes an action, we log it, we review it, we audit it. Agents collapse this model. They act autonomously, across systems, at machine speed. They do not wait for approval. They do not leave the kind of audit trail a human session does.
Consider a standard enterprise AI agent deployment: the agent has read-write access to the CRM, can query the HR database for contact information, has API keys to cloud infrastructure, and can send emails via the corporate mail server. Each of those individual permissions might be reasonable in isolation. Together, they make the agent one of the most privileged identities in the organization and almost certainly the least monitored one.

This attack type called prompt injection is not theoretical. It was the most actively exploited agentic attack vector throughout Q4 2025, according to Lakera AI’s analysis across enterprise environments. The attack does not require hacking the agent. It requires tricking it, which is significantly easier.

So, who is actually responsible?
When a breach like this happens, accountability fractures across multiple layers. This is where enterprises are currently unprepared. There is no single owner, which in practice means nobody is watching.

“Under GDPR and emerging AI regulation frameworks, your organization is liable for data breaches caused by your agents, regardless of whether a human explicitly authorized the data release.”
What responsible deployment actually looks like
Only 10% of organizations currently have a developed strategy for managing non-human and agentic identities, according to a 2026 Okta survey of 260 executives. The remaining 90% are deploying agents with the same identity hygiene practices they use for service accounts which is to say, very little.
The controls that actually reduce risk are not exotic. They are the fundamentals applied to a new class of identity:
- Treat every AI agent as a privileged identity. Apply the same lifecycle management, access reviews, and offboarding procedures you use for human accounts.
- Enforce least privilege at provisioning, not after deployment. Agents should have access to exactly what they need for their defined function nothing more.
- Implement human-in-the-loop checkpoints for any action with financial, operational, or external communication impact. The agent should propose; a human should confirm.
- Log agent behavior as a first-class audit trail. Every tool call, every external request, every data access should be attributable and reviewable.
- Test agents for prompt injection before production deployment. Red-team exercises specifically designed for agentic systems are now a standard part of responsible AI rollout.
- Review OAuth scopes and third-party integrations quarterly. Revoke permissions that are no longer necessary. Treat ‘Allow All’ as a security incident waiting to happen.
The governance gap is the real vulnerability
The Vercel incident did not require a sophisticated attacker. It required a credentialed agent, an overpermissioned OAuth token, and no one checking whether that combination made sense. That is the state of most enterprise AI deployments right now.
78% of agents involved in 2025 and 2026 breaches had significantly broader permission scopes than their designated function required. This is not a technology failure. It is an organizational one. Teams deploy agents under delivery pressure, grant broad access to make them work, and intend to tighten permissions later. Later rarely comes.
Agentic AI will keep expanding inside enterprise environments. Gartner’s projection of 40% of enterprise applications embedding agents by end of 2026 is almost certainly conservative given current adoption rates. The organizations that come through this period without a major agent-driven incident will be the ones that treated agent security as a governance question from day one not a remediation task after something breaks.
The agent does not know it got used. Your incident response team does not have a playbook for it. Your audit logs may not capture it. That gap is the vulnerability.





