Who Approved This Agent? Rethinking IAM in the AI Agent Era
AI agents are no longer new in enterprises. They schedule meetings, access data, run workflows, write code, and make decisions in real time. They push productivity well beyond what human teams could sustain on their own.
Until one day the security team asks:
“Wait… who approved this action?”
A simple question that always had an answer before now has no easy answer.
AI Agents Break Traditional Access Models
The problem is not that agents are malicious. The problem is that AI agents are neither humans nor service accounts. They are entirely new entities that do not fit the IAM models we have used for the past 20 years.
| Entity Type | Ownership | Permission Scope | Accountability |
|---|---|---|---|
| Human | Clear | Limited by role | Clear |
| Service Account | Clear | Limited by function | Clear |
| AI Agent | Often unclear | Expands over time | Unclear |
Once an agent is permitted to run, it does not stop. It operates 24/7, integrates with new systems, accumulates permissions over time, and slowly becomes an entity with broader access than anyone in the company.
This is access drift on steroids.
Agents do not just automate human actions. They extend human access. A user who should not have access to certain data can simply ask an agent to fetch it. Technically the access is valid – the agent has permission. But contextually, it is a security gap that traditional IAM cannot detect.
3 Types of AI Agents and Their Risks
Not all agents carry the same risk. There is a risk hierarchy you need to understand.
Personal Agent (User-Owned)
Agents owned by individual employees, operating within that user's permission boundaries. If the user loses access, the agent also loses access.
- Risk: Low
- Blast radius: Only that user
- Oversight: Easy
Third-Party Agent (Vendor-Owned)
Agents embedded in the SaaS platforms you use. Examples: AI features in CRM, collaboration tools, or security platforms.
- Risk: Medium
- Accountability: Lies with vendor
- Primary risk: Supply chain
Organizational Agent (Shared)
This is the ticking time bomb that almost every company has but is unaware of: agents shared between teams, running in the background with broad permissions across systems and no clear owner.
This is the highest risk type of agent. When something goes wrong, no one is responsible. No one knows exactly what this agent can do, and no one dares to turn it off.
The Authorization Bypass Problem No One Talks About
This is the most dangerous and least understood risk:
AI agents do not just execute commands. They become access intermediaries.
Instead of users interacting directly with systems, users talk to agents, and agents talk to systems. Agents use their own credentials, their own tokens, and their own permissions.
The result:
- Access is technically valid
- No alerts in SIEM
- No logs showing the user performed that action
- The user never had permission to perform that action
This is a perfect authorization bypass. And it is happening right now, every day, in thousands of companies.
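Closing this gap means evaluating every agent action against the identity of the original requester, not just the agent's own credentials. A minimal sketch of that check, with purely illustrative user, agent, and permission names:

```python
# Sketch: allow an agent action only if BOTH the agent and the human
# it acts for hold the permission. All names here are hypothetical.

USER_PERMISSIONS = {
    "alice": {"crm:read"},
    "bob": {"crm:read", "finance:read"},
}
AGENT_PERMISSIONS = {
    "report-agent": {"crm:read", "finance:read"},
}

def authorize(agent: str, on_behalf_of: str, action: str) -> bool:
    """Deny delegated actions the requesting user could not perform directly."""
    agent_ok = action in AGENT_PERMISSIONS.get(agent, set())
    user_ok = action in USER_PERMISSIONS.get(on_behalf_of, set())
    return agent_ok and user_ok

# The agent alone can read finance data, but Alice cannot,
# so her delegated request is denied.
assert authorize("report-agent", "bob", "finance:read") is True
assert authorize("report-agent", "alice", "finance:read") is False
```

The key design choice is that the agent's token is never sufficient by itself: the original user's identity must travel with every request and be checked at the point of access.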
What Must Change
Securing AI agents cannot be done by patching old IAM. You need to rethink the entire risk model.
1. Every agent MUST have an owner
No exceptions. Without a clear owner, an agent must not run.
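One way to make the rule enforceable rather than aspirational is to reject ownerless agents at registration time. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner: str  # a named human or team; never empty

def register_agent(name: str, owner: str) -> AgentRecord:
    """Refuse to register any agent that lacks a clear owner."""
    if not owner or not owner.strip():
        raise ValueError(f"agent {name!r} has no owner and must not run")
    return AgentRecord(name=name, owner=owner.strip())
```

An agent that cannot name its owner simply never enters the registry, so it never receives credentials.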
2. Map User, Agent, and System relationships
Do not just know what an agent can access. Know who can invoke that agent, and with what authority.
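That mapping can start as two simple relations: which users may invoke which agents, and which systems each agent can reach. The user's effective reach is the union over the agents they can invoke. A sketch with illustrative names:

```python
# Hypothetical relationship map: user -> invokable agents,
# and agent -> reachable systems.
INVOCATIONS = {
    "alice": {"report-agent"},
}
AGENT_SYSTEMS = {
    "report-agent": {"crm", "warehouse"},
}

def effective_reach(user: str) -> set:
    """Systems a user can touch indirectly through agents they may invoke."""
    reach = set()
    for agent in INVOCATIONS.get(user, set()):
        reach |= AGENT_SYSTEMS.get(agent, set())
    return reach

# Alice's direct permissions may be narrow, but through the agent
# her effective reach includes every system the agent can touch.
assert effective_reach("alice") == {"crm", "warehouse"}
```

Reviewing this derived reach, rather than only direct grants, is what surfaces the access drift described above.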
3. Agent permissions must expire
Do not grant permanent permissions. Review every grant at least every 30 days.
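Expiry is straightforward to encode: every grant carries a hard expiry timestamp, and validation fails once it passes. A minimal sketch, assuming a 30-day TTL:

```python
from datetime import datetime, timedelta, timezone

GRANT_TTL = timedelta(days=30)  # review window from the article

def grant(permission: str, now: datetime) -> dict:
    """Issue a permission that expires unless it is re-reviewed."""
    return {"permission": permission, "expires_at": now + GRANT_TTL}

def is_valid(g: dict, now: datetime) -> bool:
    return now < g["expires_at"]

issued = datetime(2026, 1, 1, tzinfo=timezone.utc)
g = grant("crm:read", issued)
assert is_valid(g, issued + timedelta(days=29))
assert not is_valid(g, issued + timedelta(days=31))
```

The default is removal: an unreviewed permission silently disappears instead of silently accumulating.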
4. All agent actions must be auditable
Every action taken by an agent must have a clear trail: who requested it, when, and what the result was.
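A structured, append-only record per action is enough to answer "who approved this?" after the fact. A sketch of one such log line, with illustrative field names:

```python
import json
from datetime import datetime, timezone

def audit_record(agent: str, requested_by: str, action: str, result: str) -> str:
    """One JSON log line per agent action: who requested it,
    when it happened, and what the result was."""
    return json.dumps({
        "agent": agent,
        "requested_by": requested_by,  # the human, not just the agent
        "action": action,
        "result": result,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

line = audit_record("report-agent", "alice", "crm:read", "success")
```

Because the requesting user is recorded alongside the agent, the bypass described earlier becomes visible in the logs instead of disappearing behind the agent's own identity.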
Closing
AI agents are the largest productivity boost since the internet. But they are also the biggest security threat we have ever faced.
The problem is not the technology. The problem is we are still using 2005 security rules to govern 2026 technology.
Until we dare to rethink access models, the question “who approved this?” will remain unanswered. And one day, when an incident occurs, there will be no one to hold accountable.
This article is based on analysis from Wing Security about AI agent risks in enterprise environments.