Chapter 22: Identity in AI Systems — When the "User" Is an Agent

This is Part 22 — the final chapter — of a walkthrough of my book OpenID: Modern Identity for Developers and Architects. In the previous chapter we covered decentralized identity. Chapter 22 is the frontier: identity and authorization for AI agents.


22.1 — LLM Authentication: The Non-Human Principal

Classical identity systems assume the principal is a human. AI agents break that assumption. An LLM, an autonomous script, a long-running workflow — these need to authenticate, but they can't type a password or touch a biometric sensor.

The current pragmatic answer is OAuth 2.0 Client Credentials: treat the agent as a confidential client, issue it a client_id and client_secret, scope its access aggressively. This works, but it treats every agent the same — no session, no user context, no audit trail back to a human who asked for the action.
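In code, the Client Credentials grant (RFC 6749, section 4.4) is just a form-encoded POST to the token endpoint. A minimal sketch of building that request; the client_id, secret, and scope values are placeholders, not real credentials:

```python
# Sketch of the Client Credentials grant for an agent. The agent
# authenticates as itself: no session, no user context.
from urllib.parse import urlencode

def build_token_request(client_id: str, client_secret: str, scopes: list[str]) -> dict:
    """Build the form body for an RFC 6749 section 4.4 token request."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # Scope aggressively: request only what this task needs.
        "scope": " ".join(scopes),
    }

body = build_token_request("agent-7f3", "s3cr3t", ["tickets:read"])
# POST this as application/x-www-form-urlencoded to the token endpoint:
encoded = urlencode(body)
```

Note what is missing from the request: any claim about which human, if any, is behind the agent. That gap is exactly the problem the rest of the chapter works on.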

The harder question Chapter 22 opens: should agents be treated as applications or as delegated users? Application identity is simple and wrong for anything resembling real autonomy; delegated-user identity is more accurate but requires protocols that can represent "this agent is acting on behalf of this human for this task." The industry is still figuring this out.

22.2 — Agent Authorization and the Model Context Protocol (MCP)

The Model Context Protocol (MCP) is an emerging standard for how AI applications connect to tools, data sources, and services through a uniform interface. Instead of every AI platform hand-rolling integrations with every API, MCP servers expose capabilities in a standard shape, and MCP-aware clients discover and invoke them.

The identity question MCP raises is immediate: every tool invocation creates a delegation chain. The user authorizes the AI application. The AI application authenticates to the MCP server. The MCP server enforces authorization on behalf of both the agent and the delegated user. Three layers, three trust decisions, and every layer needs to preserve audit context.

Key idea: "The agent did it" is never a complete audit statement. "User X asked agent Y to do Z, which invoked tool T against resource R" is. Your identity infrastructure needs to preserve the full chain, not just the last hop.
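One concrete way to carry that chain is the `act` (actor) claim from OAuth 2.0 Token Exchange (RFC 8693), which nests one actor inside another. The following is a sketch of walking such a chain; the claim values are illustrative, and the precise nesting semantics are defined by the spec itself:

```python
# Walking an RFC 8693-style delegation chain: `sub` is the original
# principal, and each nested `act` claim identifies one more actor.
def delegation_chain(claims: dict) -> list[str]:
    """Return every principal in the chain, outermost first."""
    chain = [claims["sub"]]
    act = claims.get("act")
    while act:                      # each nesting level is one delegation hop
        chain.append(act["sub"])
        act = act.get("act")
    return chain

token_claims = {
    "sub": "user-x",                        # the human who asked
    "act": {"sub": "agent-y",               # the agent acting for them
            "act": {"sub": "mcp-tool-t"}},  # the tool the agent invoked
}
# delegation_chain(token_claims) -> ["user-x", "agent-y", "mcp-tool-t"]
```

An audit system that logs the whole list, rather than only the last hop, can answer "user X asked agent Y to do Z" after the fact.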

22.3 — Dynamic Client Registration for Ephemeral Agents

Traditional OAuth 2.0 assumes clients are registered ahead of time. That breaks for AI agents. An agent spun up to answer a single question from a single user should not require an operator to provision credentials first.

Dynamic Client Registration (DCR), the same spec we met in Chapter 6, becomes essential here. An orchestrator spawns an agent; the agent registers itself with the authorization server; the server returns scoped credentials; the agent runs; when it's done, credentials are revoked and the record is cleaned up. The whole lifecycle in minutes.

The discipline that makes this safe: scope constraints, registration-time policy checks, and a way to bind the registered agent back to the user or system that initiated it. DCR without policy is an attacker's favorite backdoor; DCR with policy is the foundation of scalable agent identity.
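Here is a sketch of what a registration-time policy gate in front of RFC 7591 Dynamic Client Registration might look like. The allowed-scope list and the `initiated_by` metadata field are assumptions made for illustration; they are not part of the spec:

```python
# Policy gate applied before accepting an RFC 7591 registration request.
# ALLOWED_SCOPES and the `initiated_by` field are illustrative assumptions.
ALLOWED_SCOPES = {"tickets:read", "calendar:read"}

def check_registration(metadata: dict) -> tuple[bool, str]:
    """Registration-time policy: scopes must be constrained, and the agent
    must be bound to the user or system that initiated it."""
    requested = set(metadata.get("scope", "").split())
    if not requested:
        return False, "no scopes requested"
    if not requested <= ALLOWED_SCOPES:
        return False, f"disallowed scopes: {requested - ALLOWED_SCOPES}"
    if "initiated_by" not in metadata:   # who spawned this agent?
        return False, "no initiator binding"
    return True, "ok"

ok, reason = check_registration({
    "client_name": "ephemeral-agent-42",
    "grant_types": ["client_credentials"],
    "scope": "tickets:read",
    "initiated_by": "user-x",            # binds the agent back to a human
})
```

A request for scopes outside the allow-list, or with no initiator binding, is rejected before any credentials exist. That is the difference between DCR as a backdoor and DCR as a foundation.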

22.4 — Trust in Autonomous Services

Human users make mistakes and usually realize it. Autonomous agents do not always. If an agent executes a destructive API call, the authorization layer should have made that call impossible in the first place — not relied on the agent's good judgment.

The patterns that matter:

  • Capability-scoped credentials: agents only get the minimum scopes for their task.
  • Human-in-the-loop for destructive actions: some operations require a confirmed intent from a human, surfaced via CIBA (Chapter 4) or similar out-of-band flows.
  • Rate limits and anomaly detection: an agent suddenly making 10,000 calls when its normal pattern is 100 is a signal, regardless of whether the individual calls are authorized.
  • Complete audit trails: every call attributed to the full chain, every call queryable after the fact.
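The last two bullets can be sketched together: a per-call record that keeps the full chain, plus a naive volume check against a baseline. Field names and the 10x threshold are illustrative assumptions, not recommendations:

```python
# Per-call audit records attributed to the full chain, and a simple
# rate-anomaly signal. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditRecord:
    user: str       # who asked
    agent: str      # which agent acted
    tool: str       # which tool was invoked
    resource: str   # against what

audit_log: list[AuditRecord] = []

def record_call(user: str, agent: str, tool: str, resource: str) -> None:
    audit_log.append(AuditRecord(user, agent, tool, resource))

def calls_by_agent(agent: str) -> list[AuditRecord]:
    """Queryable after the fact: everything a given agent did."""
    return [r for r in audit_log if r.agent == agent]

def is_anomalous(calls_this_window: int, baseline: float, factor: float = 10.0) -> bool:
    """An agent at 10,000 calls against a baseline of 100 is a signal,
    regardless of whether each individual call was authorized."""
    return calls_this_window > baseline * factor

record_call("user-x", "agent-y", "tool-t", "resource-r")
```

A production system would persist these records and compute baselines statistically, but the shape is the point: attribution is per call, and anomaly detection is orthogonal to per-call authorization.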

Important: Identity for AI isn't a new problem in the protocols — OIDC, OAuth 2.0, DCR, and CIBA all apply. It is a new problem in the operational model: how aggressively to scope, how to wire human intent into agent decisions, and how to audit a delegation chain that didn't exist in classical identity. The rest of this decade is about getting this right.

What Chapter 22 Sets Up — and What the Book Sets Up

Chapter 22 is the end of the book, but it's not really an ending. It's the start of the next twenty years of identity engineering. AI agents, decentralized credentials, continuous verification, passkeys at scale — all of it builds on the foundation we've walked through: authentication as trust, authorization as a separate concern, tokens as a careful contract, and the discipline to never confuse who you are with what you're allowed to do.

That is the thesis of OpenID: Modern Identity for Developers and Architects, and Chapter 22 is the proof that it generalizes to the problems we don't have good answers for yet. The foundations hold.


That's the series. Thank you for reading. Twenty-two chapters of teasers have tried to change how you think about identity; the book itself tries to give you the tools to build it.

Want the full picture? Grab OpenID: Modern Identity for Developers and Architects here for the complete 22-chapter treatment — from the first password era through the frontier of AI agent identity.
2026-03-28

Sho Shimoda

I share and organize what I’ve learned and experienced.