MCP standardizes how agents connect to tools. But it deliberately leaves one critical question unanswered: how does an agent, or the platform managing agents, find the right MCP server in the first place?

For a single developer running a local MCP server via stdio, discovery isn’t a problem. You configure it manually and move on. But in an enterprise with dozens of MCP servers across teams, or in an ecosystem where third-party tool providers publish servers for anyone to use, manual configuration doesn’t scale. You need a registry.

The official MCP Registry, launched in preview in September 2025 and backed by Anthropic, GitHub, Microsoft, and PulseMCP, provides exactly this: a centralized metadata catalog for publicly available MCP servers, with a standardized API that downstream tools and private registries can build on.

What the Registry is (and isn’t)

The MCP Registry is a metadata catalog, not a package repository. It stores information about MCP servers—names, descriptions, installation instructions, transport types, configuration requirements—but not the server code itself. The actual packages live where they always have: npm, PyPI, Docker Hub, or as remote HTTP endpoints.

Think of it as a phone book. It tells you what’s available and how to reach it. It doesn’t host the service itself.

This separation is intentional. Package registries already handle versioning, security scanning, and artifact distribution. The MCP Registry adds the discovery layer on top without duplicating that infrastructure.

The server.json format

Every server in the registry is described by a server.json metadata document. Here’s what a real entry looks like:

{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
  "name": "io.github.acme-corp/order-management",
  "description": "Query, create, and manage customer orders. Supports filtering by status, date range, and customer ID. Returns order summaries with line items and fulfillment state.",
  "repository": {
    "url": "https://github.com/acme-corp/mcp-order-server",
    "source": "github"
  },
  "version": "2.1.0",
  "packages": [
    {
      "registryType": "npm",
      "identifier": "@acme-corp/mcp-order-server",
      "version": "2.1.0",
      "transport": {
        "type": "stdio"
      },
      "environmentVariables": [
        {
          "name": "ORDERS_API_KEY",
          "description": "API key for the orders service",
          "isRequired": true,
          "isSecret": true,
          "format": "string"
        }
      ]
    }
  ]
}

The format captures what a client or platform needs to install and configure a server without ambiguity: the package manager, the transport type, and the required environment variables. The packages array supports multiple deployment options for the same server—an npm package for stdio, a Docker image for containerized deployments, or a remote URL for Streamable HTTP.
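
Selecting among those options is mechanical once the metadata is parsed. Here is a minimal Python sketch; the helper name is mine, and the transport type strings beyond stdio are assumptions to confirm against the schema:

def select_package(server_entry, preferred=("stdio", "streamable-http")):
    # Walk the preference order and return the first package whose
    # declared transport matches; None if nothing fits.
    for transport_type in preferred:
        for pkg in server_entry.get("packages", []):
            if pkg.get("transport", {}).get("type") == transport_type:
                return pkg
    return None

Run against the order-management entry above, this returns the npm package with its stdio transport; the host still has to supply ORDERS_API_KEY before launching the server.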

Namespace verification and trust

The most consequential design decision in the MCP Registry is its namespace system. Server names follow a reverse-DNS format, and each namespace is bound to a verified publisher identity:

Authentication Method   Name Format             Verification
GitHub                  io.github.username/*    OAuth device flow via GitHub
DNS                     com.example/*           TXT record with Ed25519 or ECDSA public key
HTTP                    com.example/*           .well-known/mcp-registry-auth file with public key

This means io.github.anthropic/mcp-server can only be published by someone who controls the anthropic GitHub account. com.stripe/payments can only be published by someone who controls the stripe.com domain. You can’t squat on a namespace you don’t own.

The verification works by tying the publisher’s identity to a cryptographic key pair. For DNS authentication, the flow is:

  1. Generate an Ed25519 or ECDSA key pair
  2. Publish the public key as a DNS TXT record on your domain
  3. The mcp-publisher CLI signs publish requests with your private key
  4. The registry verifies the signature against the DNS record
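
The signature mechanics are ordinary public-key cryptography. Here is a minimal Python sketch using the cryptography package; key encoding, the DNS lookup, and the exact payload format are simplified here and handled by the real tooling per the spec:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
# Publisher side: generate a key pair; the public key is what gets
# published in the domain's DNS TXT record.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
# Publisher side: sign the publish request payload with the private key.
payload = b'{"name": "com.example/payments", "version": "1.0.0"}'
signature = private_key.sign(payload)
# Registry side: verify the signature against the key found in DNS.
try:
    public_key.verify(signature, payload)
    print("valid: the publisher controls the key behind com.example")
except InvalidSignature:
    print("invalid: reject the publish request")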

This is a significant improvement over package registries like npm, where namespace squatting has been a persistent security problem. The MCP Registry makes it structurally impossible to impersonate another organization’s servers.

The API

The registry exposes a REST API for discovering servers. The primary endpoint:

curl "https://registry.modelcontextprotocol.io/v0/servers?search=database&limit=5"

Returns:

{
  "servers": [
    {
      "server": {
        "name": "io.github.acme-corp/order-management",
        "description": "Query, create, and manage customer orders...",
        "version": "2.1.0",
        "packages": [ ... ]
      },
      "_meta": {
        "io.modelcontextprotocol.registry/official": {
          "status": "active",
          "publishedAt": "2026-01-15T10:30:00Z",
          "updatedAt": "2026-02-01T14:22:00Z",
          "isLatest": true
        }
      }
    }
  ],
  "metadata": {
    "count": 1
  }
}

The API supports pagination via cursor and limit parameters, and keyword search via search. It’s deliberately simple—the registry provides the raw catalog, and leaves richer search, filtering, and curation to downstream consumers.
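
For a consumer that needs more than one page, the cursor loop is simple. Here is a sketch using only the Python standard library; the nextCursor field name is an inference from the cursor parameter, so confirm the exact key against the OpenAPI spec:

import json
import urllib.parse
import urllib.request
BASE = "https://registry.modelcontextprotocol.io/v0/servers"
def iter_servers(search=None, limit=50):
    # Yield server entries page by page, following the pagination cursor.
    cursor = None
    while True:
        params = {"limit": limit}
        if search:
            params["search"] = search
        if cursor:
            params["cursor"] = cursor
        with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
            page = json.load(resp)
        yield from page.get("servers", [])
        cursor = page.get("metadata", {}).get("nextCursor")  # assumed field name
        if not cursor:
            break
for entry in iter_servers(search="database", limit=5):
    print(entry["server"]["name"])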

The ecosystem model

The registry is designed as a hub in a federated ecosystem, not as the single point of access:

The official registry is the authoritative source for publicly available MCP servers. Server developers publish here using the mcp-publisher CLI tool.

Downstream aggregators (marketplaces, IDE plugin stores, platform catalogs) pull from the official registry and enrich the data with ratings, reviews, security audits, usage statistics, or editorial curation. The registry’s API is intended for periodic bulk consumption by these aggregators, not for real-time queries from individual clients.

Private registries implement the same OpenAPI specification but serve internal servers that shouldn’t be public. An enterprise might run a private registry that combines its internal MCP servers with a curated subset from the official registry—applying its own governance policies, access controls, and security requirements.

Host applications (IDEs, chat interfaces, agent platforms) don’t consume the official registry directly. They integrate with downstream aggregators or private registries that have applied the appropriate curation and security layers for their context.

This layered model is important for enterprises. The official registry is a public catalog—it can’t enforce your organization’s security policies, access controls, or approval workflows. Those concerns belong in the private registry or aggregator layer that sits between the public catalog and your agents.
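
To make that middle layer concrete, here is a hedged Python sketch of a private catalog that serves internal servers alongside an approved subset of the public registry; the allowlist and internal entries are placeholders for your own governance data:

APPROVED_PUBLIC = {"io.github.acme-corp/order-management"}  # your allowlist
INTERNAL = [{"server": {"name": "com.internal/hr-lookup", "version": "0.3.0"}}]
def list_servers(public_entries):
    # Return internal servers plus only the allowlisted public ones,
    # in the same response shape the official API uses.
    approved = [e for e in public_entries if e["server"]["name"] in APPROVED_PUBLIC]
    servers = INTERNAL + approved
    return {"servers": servers, "metadata": {"count": len(servers)}}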

What the Registry doesn’t provide

Runtime discovery. The registry is a build-time and configuration-time resource, not a runtime service. An agent doesn’t query the registry mid-conversation to find a tool. The platform or host application resolves server metadata ahead of time and configures the agent’s available tools.

Quality signals. The registry doesn’t rate, rank, or review servers. It stores metadata and verifies namespace ownership. Whether a server is reliable, well-maintained, or secure is left to downstream aggregators and platform operators to assess.

Security scanning. The registry delegates security to the underlying package registries (npm, PyPI, Docker Hub) and to downstream aggregators. It doesn’t inspect server code for vulnerabilities, malicious behavior, or supply chain risks.

Private server support. The official registry is public only. Servers on private networks, internal package registries, or behind VPNs aren’t supported. Enterprises need private registries for internal tooling—the OpenAPI spec is available for this purpose, but you have to build or operate the infrastructure yourself.

Capability introspection. The registry stores what the publisher declares in server.json, but it doesn’t verify those claims at runtime. A server that claims to support tools and resources might actually only implement tools. Verification happens when the MCP client connects and performs capability negotiation—the registry operates on trust in the publisher’s declarations.
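
Verifying a declaration is a matter of connecting and asking. Here is a sketch using the official Python MCP SDK, reusing the package name and API key variable from the order-management example above:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
params = StdioServerParameters(
    command="npx",
    args=["-y", "@acme-corp/mcp-order-server"],
    env={"ORDERS_API_KEY": "..."},  # supply the real secret at runtime
)
async def check_capabilities():
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            init = await session.initialize()
            # What the server actually advertises at connect time,
            # as opposed to what its server.json entry claims.
            print("capabilities:", init.capabilities)
            tools = await session.list_tools()
            print("tools:", [tool.name for tool in tools.tools])
asyncio.run(check_capabilities())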

Enterprise considerations

Plan for a private registry from day one. Even if you consume public servers, you’ll have internal MCP servers that shouldn’t be in a public catalog. A private registry that implements the MCP Registry OpenAPI spec gives you a unified interface for both internal and external server discovery, with your governance policies applied consistently.

Treat the registry as your MCP server catalog, not your tool catalog. A single MCP server can expose dozens of tools. The registry tells you about servers; your MCP client’s capability negotiation tells you about the tools within those servers. These are separate layers of discovery.

Use namespace verification to your advantage. If your organization publishes MCP servers, register your domain namespace early. DNS-based verification ties your server identity to infrastructure you already control—no dependency on GitHub account ownership.

Build an approval workflow for new servers. The registry makes it easy to discover servers. That’s a feature and a risk. Not every public MCP server should be allowed to connect to your enterprise systems. Your private registry or aggregator should implement an approval process before making a new server available to your agents.

Monitor spec stability. The registry API is currently at v0 and in preview. The API entered a stability freeze in late 2025, but breaking changes are still possible before general availability. Build your integrations with this in mind: abstract the registry API behind an internal interface that can absorb upstream changes.
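
A thin adapter is enough to contain that risk. A minimal Python sketch, with the class and method names as illustrative choices rather than anything standard:

import json
import urllib.parse
import urllib.request
from typing import Iterable, Protocol
class ServerCatalog(Protocol):
    # The interface the rest of your platform codes against.
    def search(self, query: str, limit: int = 20) -> Iterable[dict]: ...
class OfficialRegistryCatalog:
    # Adapter for the v0 REST API; upstream changes stay inside this class.
    BASE = "https://registry.modelcontextprotocol.io/v0/servers"
    def search(self, query: str, limit: int = 20) -> Iterable[dict]:
        qs = urllib.parse.urlencode({"search": query, "limit": limit})
        with urllib.request.urlopen(self.BASE + "?" + qs) as resp:
            return json.load(resp).get("servers", [])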

The bigger picture

The MCP Registry solves a real gap in the protocol ecosystem. MCP standardizes how agents connect to tools; the registry standardizes how they find them. Without a discovery mechanism, the MCP ecosystem would fragment into walled gardens where each platform maintains its own incompatible server catalog.

The federated design—a public authoritative registry, a standardized OpenAPI spec, and an expectation that private registries and aggregators will sit between the catalog and end users—is architecturally sound. It mirrors how package ecosystems like npm work, with the addition of cryptographic namespace verification that the package world has struggled to implement.

For enterprises, the registry is infrastructure to build on, not a finished product to adopt. The public catalog handles discovery. Your private registry handles governance. Your aggregator handles curation. And your agent platform handles the runtime connection. Each layer has a clear role—and the registry’s deliberate minimalism makes that layering possible.