Most API documentation is written for humans. It assumes a reader who can infer context, handle ambiguity, and make reasonable assumptions about edge cases.
Agents don’t work that way.
When an AI agent consumes your API—whether directly through an OpenAPI specification or indirectly through MCP tool descriptions—it reads the specification literally. Every ambiguity is a potential failure mode. Every undocumented behavior is a surprise waiting to happen. Every implicit convention is a gap the agent will fill with a guess.
The agent experience (AX) mindset
Just as we talk about developer experience (DX), we need to think about agent experience (AX). What does an agent need to successfully consume your API?
Don’t rely on naming conventions or implicit understanding. If status: "active" means the order is ready to ship, say that explicitly in the schema description. Agents don’t share the institutional knowledge your team has built over years.
Agents also use examples for few-shot learning, so provide examples for every endpoint—including edge cases and error scenarios. An example of a successful response paired with an example of each error type gives the agent a concrete model for what to expect.
Your error taxonomy matters more than you think. A generic 400 response doesn’t help an agent decide whether to retry, modify the request, or escalate to a human. Distinguish between client errors that can be fixed (invalid parameter format), client errors that can’t (insufficient permissions), and server errors that might resolve on retry.
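This retry/modify/escalate decision can be sketched as a small client-side policy. The status-to-action mapping and the machine-readable error codes below ("invalid_parameter") are illustrative assumptions, not part of any standard:

```python
from typing import Optional

# Transient server-side failures that may resolve on retry.
RETRYABLE_STATUSES = {429, 500, 502, 503, 504}

def recovery_action(status: int, error_code: Optional[str] = None) -> str:
    """Decide whether to retry, modify the request, or escalate to a human."""
    if status in RETRYABLE_STATUSES:
        return "retry"            # transient: back off and try again
    if status == 400 and error_code == "invalid_parameter":
        return "modify_request"   # fixable client error
    if status in (401, 403):
        return "escalate"         # the agent cannot grant itself permissions
    return "escalate"             # unknown failure: hand off to a human

print(recovery_action(503))                       # retry
print(recovery_action(400, "invalid_parameter"))  # modify_request
print(recovery_action(403))                       # escalate
```

The point of the taxonomy is that a policy like this is only possible when the API distinguishes the cases in the first place.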
Document idempotency guarantees explicitly. Agents may retry operations—especially when network timeouts leave the outcome ambiguous. An agent that doesn’t know whether a failed POST /orders actually created the order will either duplicate it or lose it. Provide idempotency key patterns for operations that aren’t naturally idempotent.
Schema design principles
Use enums liberally. If a field has a finite set of valid values, enumerate them in the schema. Agents struggle with freeform strings where the valid options are documented only in prose paragraphs. An enum of ["pending", "processing", "shipped", "delivered", "cancelled"] is unambiguous. A description that says “status can be any standard order status” is not.
Avoid polymorphism where possible. Discriminated unions are manageable, but patterns where the same field can be completely different types based on context cause problems. If response.data is sometimes an object and sometimes an array, an agent has to handle both cases—and it will often handle the less common one incorrectly.
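For contrast, a discriminated union stays manageable precisely because one field tells the consumer which shape follows. A sketch with hypothetical payment-method shapes:

```python
def handle_payment_method(pm: dict) -> str:
    """Dispatch on the discriminator field rather than sniffing the shape."""
    kind = pm["type"]  # the discriminator: always present, always a string
    if kind == "card":
        return f"card ending {pm['last4']}"
    if kind == "bank_transfer":
        return f"transfer from {pm['iban']}"
    raise ValueError(f"unknown payment type: {kind}")

print(handle_payment_method({"type": "card", "last4": "4242"}))
print(handle_payment_method({"type": "bank_transfer", "iban": "DE00"}))
```

Compare this with a field that is "sometimes an object, sometimes an array": there is no discriminator to dispatch on, so the consumer has to guess from structure alone.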
Put constraints in the schema, not just the documentation. Minimum/maximum values, string patterns, array length limits—these give agents hard boundaries they can validate against before making a request.
Be explicit about nullability. Is this field optional (may be absent), or can it be present but null? These are different conditions with different semantics. In OpenAPI 3.0 terms, required and nullable are independent properties; in pure JSON Schema, nullability is expressed as a type union such as type: ["string", "null"]. Agents that encounter an unexpected null in a required field will handle it unpredictably.
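The absent-versus-null distinction is easy to demonstrate in consumer code; the shipped_at field below is hypothetical:

```python
def describe_field(payload: dict, field: str) -> str:
    """Distinguish the three states a field can be in."""
    if field not in payload:
        return "absent"   # optional field was omitted entirely
    if payload[field] is None:
        return "null"     # field is present with an explicit null value
    return "set"

print(describe_field({"order_id": "o-1"}, "shipped_at"))                      # absent
print(describe_field({"order_id": "o-1", "shipped_at": None}, "shipped_at"))  # null
print(describe_field({"shipped_at": "2024-01-01"}, "shipped_at"))             # set
```

An agent can only handle all three states correctly if the schema says which ones are possible.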
OpenAPI: before and after
The difference between a human-readable API and an agent-ready API often comes down to the quality of descriptions and the completeness of schemas. Consider this operation:
Before—human-readable but agent-ambiguous:
```yaml
paths:
  /orders:
    post:
      summary: Create order
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                items:
                  type: array
                customer_id:
                  type: string
```
After—agent-ready with explicit semantics:
```yaml
paths:
  /orders:
    post:
      summary: Create order
      description: >
        Creates a new order for the specified customer. Requires at
        least one item. Returns the created order with an assigned
        order_id. Fails with 409 if an order with the same
        idempotency_key already exists. Idempotent when
        idempotency_key is provided.
      requestBody:
        content:
          application/json:
            schema:
              type: object
              required: [customer_id, items]
              properties:
                customer_id:
                  type: string
                  format: uuid
                  description: >
                    UUID of the customer placing the order. Must
                    reference an existing, active customer account.
                items:
                  type: array
                  minItems: 1
                  maxItems: 100
                  items:
                    type: object
                    required: [sku, quantity]
                    properties:
                      sku:
                        type: string
                        pattern: "^[A-Z]{2}-\\d{6}$"
                        description: >
                          Product SKU in format XX-000000.
                      quantity:
                        type: integer
                        minimum: 1
                        maximum: 999
                idempotency_key:
                  type: string
                  format: uuid
                  description: >
                    Optional. If provided, prevents duplicate order
                    creation. Subsequent requests with the same key
                    return the existing order.
      responses:
        '201':
          description: Order created successfully
        '400':
          description: >
            Invalid request. Check items array (must have 1-100
            items), SKU format, and quantity ranges.
        '404':
          description: Customer not found or inactive
        '409':
          description: >
            Order with this idempotency_key already exists.
            Returns the existing order.
```
The second version leaves nothing to interpretation. An agent reading this spec knows exactly what to send, what constraints to respect, and how to handle every response code. Every field has a type, format, constraints, and a description that explains its purpose—not just its name.
The connection to MCP tool descriptions
When APIs are exposed to agents through MCP servers, the tool description becomes the agent’s primary interface. The same principles apply—explicit semantics, complete schemas, clear error documentation—but the format changes from OpenAPI to MCP’s tool definition:
```json
{
  "name": "create_order",
  "description": "Creates a new order for a customer. Requires at least one item with a valid SKU (format: XX-000000) and quantity (1-999). Returns the created order with assigned order_id. Use idempotency_key to prevent duplicate orders on retry.",
  "inputSchema": {
    "type": "object",
    "required": ["customer_id", "items"],
    "properties": {
      "customer_id": {
        "type": "string",
        "description": "UUID of an existing, active customer"
      },
      "items": {
        "type": "array",
        "description": "Order items (1-100)",
        "items": {
          "type": "object",
          "required": ["sku", "quantity"],
          "properties": {
            "sku": { "type": "string" },
            "quantity": { "type": "integer" }
          }
        }
      }
    }
  }
}
```
The MCP tool description is what the LLM actually reads when deciding whether and how to call a tool. A vague description like “Creates an order” gives the model insufficient context for parameter selection. A specific description that includes constraints, expected formats, and edge case behavior leads to dramatically better tool use accuracy.
This is where API design and agent design converge. The quality of your API’s schema and documentation directly determines how well agents interact with it—whether through direct API calls or through MCP tool wrappers.
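One place this convergence shows up is input validation: a tool server can check a call's arguments against the inputSchema before dispatching. A deliberately minimal, hand-rolled sketch; a real MCP server would use a full JSON Schema validator rather than this simplified type map:

```python
# Simplified mapping from JSON Schema type names to Python types.
TYPE_MAP = {"string": str, "integer": int, "number": float, "array": list, "object": dict}

def check_arguments(schema: dict, args: dict) -> list[str]:
    """Return a list of problems; an empty list means the call looks valid."""
    problems = []
    for field in schema.get("required", []):
        if field not in args:
            problems.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        expected = TYPE_MAP.get(spec.get("type"))
        if field in args and expected and not isinstance(args[field], expected):
            problems.append(f"{field}: expected {spec['type']}")
    return problems

schema = {
    "type": "object",
    "required": ["customer_id", "items"],
    "properties": {"customer_id": {"type": "string"}, "items": {"type": "array"}},
}
print(check_arguments(schema, {"customer_id": "c-1"}))               # missing items
print(check_arguments(schema, {"customer_id": "c-1", "items": []}))  # []
```

Rejecting a malformed call with a specific problem list gives the model something it can correct, rather than a generic failure.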
Testing for agent consumption
Before declaring an API agent-ready, test it with actual agents. But go beyond a simple “can the agent use it” check:
Start with schema completeness: give an agent only your OpenAPI spec (no additional context) and ask it to complete a task. Observe where it fails, guesses, or asks for clarification. Every failure is a gap in your spec.
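Part of this completeness check can be mechanized before an agent ever sees the spec. A sketch that walks a parsed schema and flags properties with no description (real specs also nest through $ref and allOf, which this deliberately ignores):

```python
def find_undescribed(schema: dict, path: str = "") -> list[str]:
    """Return dotted paths of schema properties that lack a description."""
    gaps = []
    for name, prop in schema.get("properties", {}).items():
        here = f"{path}.{name}" if path else name
        if "description" not in prop:
            gaps.append(here)
        # Recurse into nested object properties and array item schemas.
        gaps.extend(find_undescribed(prop, here))
        if isinstance(prop.get("items"), dict):
            gaps.extend(find_undescribed(prop["items"], here + "[]"))
    return gaps

before = {
    "properties": {
        "customer_id": {"type": "string"},
        "items": {"type": "array", "items": {"type": "object"}},
    }
}
print(find_undescribed(before))  # ['customer_id', 'items']
```

Run against the "before" spec above, every property is flagged; run against the "after" spec, the list is empty.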
Then deliberately trigger each error response and verify the agent can interpret the error and take appropriate action—retry, modify the request, or escalate. If your error responses don’t give the agent enough information to recover, they’re not agent-ready.
Test boundary conditions: maximum array lengths, minimum values, null fields, empty strings. Agents will encounter these in production. If your spec doesn’t document them, the agent’s behavior at the boundaries will be undefined.
Finally, test multi-step workflows—sequences of API calls that accomplish a business task. Can the agent chain operations correctly? Does it handle the output of one call as input to the next? Are the response schemas consistent enough for the agent to navigate?
The goal is an API that an agent can consume successfully with zero additional context beyond the specification itself.
Implementation considerations
Invest in descriptions, not just schemas. JSON Schema validates structure, but descriptions convey intent. Both matter, and in practice, improving the quality of description fields across your OpenAPI spec has the highest impact on agent success rates.
Version your schemas strictly. Agents don’t adapt to schema changes the way human developers do. A field that silently changes from a string to an integer will break agent workflows without any obvious error. Semantic versioning for API contracts isn’t optional in an agentic world—it’s essential.
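A contract test can catch exactly the silent string-to-integer change described above by diffing declared property types across schema versions. A minimal sketch:

```python
def breaking_type_changes(old: dict, new: dict) -> list[str]:
    """Fields whose declared type changed between two schema versions."""
    changes = []
    for field, spec in old.get("properties", {}).items():
        new_spec = new.get("properties", {}).get(field)
        if new_spec and new_spec.get("type") != spec.get("type"):
            changes.append(f"{field}: {spec.get('type')} -> {new_spec.get('type')}")
    return changes

v1 = {"properties": {"quantity": {"type": "string"}}}
v2 = {"properties": {"quantity": {"type": "integer"}}}
print(breaking_type_changes(v1, v2))  # ['quantity: string -> integer']
```

Wiring a check like this into CI turns "don't break agents" from a policy into a gate.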
The discipline of explicit schemas, comprehensive error handling, and strict versioning that agents require is the same discipline that API governance programs enforce. Designing for agents and building governance maturity turn out to be the same work.
The payoff
Agent-ready APIs aren’t just better for AI—they’re better for humans too. The discipline of explicit semantics, comprehensive examples, and clear error handling benefits everyone who consumes your API. The difference is that humans can work around gaps in your spec. Agents can’t.
Start with your most critical APIs—the ones agents are most likely to consume. Make them agent-ready. Then use that as the template for everything else.