Anthropic’s Model Context Protocol (MCP) has become the dominant standard for connecting AI agents to external tools and data sources. Adopted by OpenAI, Google, Microsoft, and dozens of tool vendors, MCP standardizes the “last mile” between a language model and the systems it needs to act on.
But as with any protocol, the gap between what MCP specifies and what production systems require is significant. Understanding that gap clearly and precisely is what separates successful enterprise adoption from integration pain.
## What MCP provides
MCP defines a stateful, JSON-RPC 2.0 based communication protocol between three roles: hosts (LLM applications that initiate connections), clients (connectors within the host), and servers (services that provide capabilities). The architecture is inspired by the Language Server Protocol that standardized IDE tooling—MCP aims to do the same for AI tool integration.
### Server features
MCP servers expose three categories of capabilities to clients:
The primary integration surface is tools. A tool is a function the AI model can invoke—querying a database, calling an API, running a computation. Each tool is defined with a name, description, and a JSON Schema for its inputs. Here’s what a tool definition looks like on the wire:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```
The server responds with the available tools and their schemas:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "query_orders",
        "description": "Search orders by customer ID, date range, or status. Returns order summaries including total, item count, and fulfillment state. Use status 'pending' to find orders awaiting processing.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "customer_id": {
              "type": "string",
              "description": "Customer UUID"
            },
            "status": {
              "type": "string",
              "enum": ["pending", "processing", "shipped", "delivered", "cancelled"]
            },
            "limit": {
              "type": "integer",
              "default": 10,
              "maximum": 100
            }
          },
          "required": ["customer_id"]
        }
      }
    ]
  }
}
```
Notice that the tool description isn’t just “search orders”—it explains what the tool returns and when to use specific parameters. This is where agent experience (AX) meets protocol design. The quality of your tool descriptions directly determines how well agents use them.
Invoking a tool follows the same JSON-RPC pattern:
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "query_orders",
    "arguments": {
      "customer_id": "cust_8f3a2b",
      "status": "pending"
    }
  }
}
```
Tool results can return text, images, audio, resource links, or structured JSON content—giving servers flexibility in how they communicate results back to the model.
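The request/response cycle above can be sketched as a minimal dispatcher. This is an illustrative stdlib-only sketch of the JSON-RPC shapes shown in this section, not the official MCP SDK; the `query_orders` handler and its in-memory data are hypothetical.

```python
import json

# Hypothetical in-memory data standing in for a real order store.
ORDERS = [
    {"id": "ord_1", "customer_id": "cust_8f3a2b", "status": "pending", "total": 42.50},
    {"id": "ord_2", "customer_id": "cust_8f3a2b", "status": "shipped", "total": 9.99},
]

def query_orders(customer_id, status=None, limit=10):
    rows = [o for o in ORDERS if o["customer_id"] == customer_id]
    if status:
        rows = [o for o in rows if o["status"] == status]
    return rows[:limit]

def handle(request):
    """Dispatch a JSON-RPC 2.0 tools/call request (sketch)."""
    if request.get("method") != "tools/call":
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    params = request["params"]
    if params["name"] == "query_orders":
        rows = query_orders(**params.get("arguments", {}))
        # Tool results are wrapped in content blocks; text is the simplest kind.
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": [{"type": "text", "text": json.dumps(rows)}],
                           "isError": False}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"content": [{"type": "text", "text": "Unknown tool"}],
                       "isError": True}}

request = {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
           "params": {"name": "query_orders",
                      "arguments": {"customer_id": "cust_8f3a2b", "status": "pending"}}}
response = handle(request)
print(json.dumps(response, indent=2))
```

In a real stdio server, `handle` would sit inside a loop reading newline-delimited JSON from stdin and writing responses to stdout.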
Resources provide context and data that the model or user can reference—files, database records, API responses, live data feeds. Resources are identified by URIs and can be subscribed to for change notifications. Where tools represent actions, resources represent information.
Servers also expose prompts: templated messages and workflows that encapsulate complex multi-step interaction patterns—“analyze this codebase” or “generate a report from these metrics”—as reusable, parameterized workflows.
### Client features
The protocol isn’t one-directional. MCP clients offer three capabilities back to servers:
Sampling allows servers to request LLM completions through the client, enabling recursive and agentic patterns where a tool server can ask the model to reason about intermediate results before continuing.
Servers can also query roots—the URI or filesystem boundaries they should operate within—enabling context-aware behavior.
Elicitation completes the picture: servers can request additional information from users through the client, enabling human-in-the-loop patterns during tool execution.
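On the wire, each of these is a server-initiated JSON-RPC request to the client. A sketch of the three request skeletons: the method names follow the spec, but the ids, message text, and schema are invented for illustration.

```python
# Server-initiated requests, one per client capability.
sampling_request = {
    "jsonrpc": "2.0", "id": 10, "method": "sampling/createMessage",
    "params": {
        "messages": [{"role": "user",
                      "content": {"type": "text",
                                  "text": "Summarize these pending orders."}}],
        "maxTokens": 100,
    },
}

# Ask the client which URI/filesystem boundaries the server may operate in.
roots_request = {"jsonrpc": "2.0", "id": 11, "method": "roots/list"}

# Ask the user a question mid-execution, with a schema for the answer.
elicitation_request = {
    "jsonrpc": "2.0", "id": 12, "method": "elicitation/create",
    "params": {
        "message": "Which warehouse should fulfill this order?",
        "requestedSchema": {"type": "object",
                            "properties": {"warehouse": {"type": "string"}}},
    },
}

for req in (sampling_request, roots_request, elicitation_request):
    print(req["method"])
```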
### Transport
MCP defines two standard transports:
| Aspect | stdio | Streamable HTTP |
|---|---|---|
| Deployment | Client launches server as subprocess | Server runs independently, handles multiple clients |
| Communication | JSON-RPC over stdin/stdout | HTTP POST for client→server, optional SSE for streaming |
| Best for | Local tools, IDE integrations, development | Remote servers, production deployments, cloud services |
| Session management | Implicit (process lifetime) | Explicit via MCP-Session-Id header |
| Resumability | Not applicable | SSE event IDs enable reconnection and message replay |
The Streamable HTTP transport replaced the earlier HTTP+SSE transport in the 2025-03-26 specification. It uses a single HTTP endpoint that accepts POST requests for client messages and optionally returns SSE streams for server responses, a significant simplification over the previous dual-endpoint design.
### Capability negotiation
At connection time, clients and servers exchange capability declarations. A server that only exposes tools doesn’t need to implement resources or prompts. A client that doesn’t support sampling simply doesn’t advertise it. This negotiation enables graceful degradation and keeps simple integrations simple.
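This exchange happens in the `initialize` handshake. A sketch of the two messages for a sampling-free client talking to a tools-only server (the protocol version string, client name, and server name are illustrative):

```python
# Client's opening request: declares protocol version and the capabilities
# this client supports. No sampling here, so it simply isn't advertised.
initialize_request = {
    "jsonrpc": "2.0", "id": 0, "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"roots": {"listChanged": True}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# A tools-only server answers with just the capabilities it implements;
# it never has to mention resources or prompts.
initialize_response = {
    "jsonrpc": "2.0", "id": 0,
    "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {"listChanged": True}},
        "serverInfo": {"name": "orders-server", "version": "1.0.0"},
    },
}

# Graceful degradation: each side only relies on what the other declared.
server_has_tools = "tools" in initialize_response["result"]["capabilities"]
client_has_sampling = "sampling" in initialize_request["params"]["capabilities"]
print(server_has_tools, client_has_sampling)
```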
## How MCP has evolved
MCP has matured significantly since its initial release:
The initial release (2024-11-05) established the core protocol with tools, resources, prompts, and sampling. It shipped with HTTP+SSE transport alongside stdio but had no authentication story.
The March 2025 release introduced OAuth 2.1 authorization, enabling proper identity flows for remote MCP servers, and added the Streamable HTTP transport to replace the dual-endpoint HTTP+SSE design. This was the release that made remote, production-grade MCP deployments viable.
The current specification (2025-06-18) added structured tool outputs with output schemas, tool annotations for metadata, elicitation for human-in-the-loop flows, and session management improvements. It also deprecated the old HTTP+SSE transport.
The trajectory is clear: MCP started as a local protocol for connecting IDEs to tools and has systematically added the features needed for remote, multi-tenant, production deployments.
## What MCP doesn’t provide
Understanding the gaps is just as important as understanding the features:
MCP doesn’t define how to find available servers. There’s no registry, no directory service, no equivalent of DNS for tool servers. In enterprise environments with dozens or hundreds of MCP servers, you’ll need to build or buy a discovery layer.
Orchestration is also out of scope. MCP connects one client to one or more tool servers, but multi-agent coordination, task decomposition, workflow management, and cross-agent communication are left to protocols like A2A.
Governance—rate limiting, cost tracking, audit logging, compliance enforcement, usage quotas—is similarly absent from the protocol. MCP tells you how to call a tool, not whether you should be allowed to.
While MCP now supports OAuth 2.1 for authenticating connections, it doesn’t define fine-grained authorization at the tool or resource level. Whether a specific agent can invoke a specific tool with specific parameters is an application-level concern.
Error semantics are minimal as well. MCP distinguishes protocol errors (malformed requests) from tool execution errors (the tool ran but failed), but doesn’t standardize error categories or recovery strategies. When a tool returns isError: true, the meaning of that error—and what to do about it—is left to implementations.
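The two error channels look quite different on the wire, and clients must check both. A sketch (error messages and ids are invented):

```python
# Protocol error: the request itself was malformed, so the server answers
# with a JSON-RPC error object and no result at all.
protocol_error = {
    "jsonrpc": "2.0", "id": 4,
    "error": {"code": -32602,
              "message": "Invalid params: unknown field 'custmer_id'"},
}

# Tool execution error: the request was well-formed and the tool ran but
# failed; the failure rides inside a normal result with isError set.
tool_error = {
    "jsonrpc": "2.0", "id": 5,
    "result": {
        "content": [{"type": "text", "text": "Upstream order service timed out"}],
        "isError": True,
    },
}

def is_tool_failure(response: dict) -> bool:
    """True only for the second kind: a successful RPC whose tool failed."""
    return "result" in response and response["result"].get("isError", False)

print(is_tool_failure(protocol_error), is_tool_failure(tool_error))
```

What to do with a `True` here (retry, surface to the model, escalate to a human) is exactly the part the protocol leaves to you.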
## Implementation considerations
Start with stdio for development, then plan for Streamable HTTP in production. Stdio is simpler to set up and debug, but production deployments need remote access, multi-client support, and proper authentication—all of which require Streamable HTTP.
Invest heavily in tool descriptions. The description field on your tools is the primary interface between the model and your system. A vague description like “manages orders” leads to misuse. A precise description that explains inputs, outputs, side effects, and when to use this tool vs. alternatives is the difference between a reliable agent and a frustrating one.
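One way to enforce this is a lint pass over tool definitions in CI. This is a hypothetical checker with illustrative heuristics, not part of any MCP SDK:

```python
def lint_tool(tool: dict) -> list[str]:
    """Flag tool definitions likely to confuse an agent (heuristics are illustrative)."""
    problems = []
    desc = tool.get("description", "")
    if len(desc) < 40:
        problems.append(f"{tool['name']}: description too short to guide an agent")
    if "inputSchema" not in tool:
        problems.append(f"{tool['name']}: missing inputSchema")
    else:
        for name, prop in tool["inputSchema"].get("properties", {}).items():
            # An enum is self-describing; anything else needs prose.
            if "description" not in prop and "enum" not in prop:
                problems.append(f"{tool['name']}.{name}: parameter lacks a description")
    return problems

vague = {"name": "orders", "description": "manages orders"}
precise = {
    "name": "query_orders",
    "description": "Search orders by customer ID, date range, or status. "
                   "Returns order summaries including total and fulfillment state.",
    "inputSchema": {"type": "object",
                    "properties": {"customer_id": {"type": "string",
                                                   "description": "Customer UUID"}}},
}
print(lint_tool(vague))
print(lint_tool(precise))
```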
Because MCP deliberately stays out of governance, you need to build a governance layer around it. An API gateway, a management platform, or custom middleware should handle authentication, rate limiting, audit logging, and access control for your MCP servers. This is the right architectural separation—governance requirements vary wildly across enterprises—but it means the responsibility falls on you.
Tool definitions will change over time. Input parameters get added, output formats evolve, tool behavior shifts. Build versioning into your deployment strategy from day one. The tools/list_changed notification helps clients detect changes, but managing the transition is your responsibility.
Finally, instrument your MCP servers with the same rigor you’d apply to any production API. Every tool invocation is a potential failure point, cost center, and security surface—track latency, error rates, token consumption, and invocation patterns.
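A sketch of per-invocation instrumentation: a decorator that records call counts, failures, and latency around a tool handler. The metrics sink here is a plain dict; in production you would swap in your telemetry library, and the `query_orders` handler is a stand-in.

```python
import time
from collections import defaultdict

METRICS = defaultdict(lambda: {"calls": 0, "errors": 0, "total_ms": 0.0})

def instrumented(tool_name):
    """Wrap a tool handler to track invocation count, failures, and latency."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            m = METRICS[tool_name]
            m["calls"] += 1
            try:
                return fn(*args, **kwargs)
            except Exception:
                m["errors"] += 1
                raise
            finally:
                # Record wall-clock latency whether the call succeeded or not.
                m["total_ms"] += (time.perf_counter() - start) * 1000
        return wrapper
    return decorator

@instrumented("query_orders")
def query_orders(customer_id):
    return [{"id": "ord_1", "customer_id": customer_id}]  # stand-in handler

query_orders("cust_8f3a2b")
print(METRICS["query_orders"]["calls"])
```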
## The bigger picture
MCP is to AI agents what HTTP is to web applications—essential infrastructure that enables everything else, but not sufficient on its own. It standardizes the critical connection between models and tools, and it’s doing so with increasing sophistication as the spec matures.
The organizations that succeed with MCP will be those that understand its role clearly. It’s not a platform. It’s not a framework. It’s a protocol—a shared language for tool integration that frees you to focus on the harder problems of governance, orchestration, and reliable agent behavior.
Build your tools well. Describe them precisely. Govern them carefully. The protocol will do its job if you do yours.