The discovery problem for agents is largely solved. A2A Agent Cards give agents a machine-readable identity. MCP gives them a standard way to bind tools. What remains unsolved is the definition problem: how do you capture everything an agent is—its purpose, capabilities, behavior, constraints, and operational requirements—in a single, portable, declarative format?

Today, building an agent means committing to a framework. Your agent’s logic lives in LangGraph state machines, or AutoGen configurations, or CrewAI YAML, or raw code. Switch frameworks, and you rewrite everything. Want to run the same agent on a different platform? Start over. Want a non-engineer to understand what the agent does? Good luck parsing the code.

This is the problem that several emerging specifications aim to solve: a declarative agent definition that captures the complete design of an agent and can be deployed to any compatible runtime. Define once, deploy everywhere.


What a portable agent definition must capture

The AI Agent Canvas provides a useful framework for what a complete agent definition needs to cover. The canvas identifies thirteen building blocks across four domains. A truly portable agent definition language needs to capture all of them declaratively—or explicitly acknowledge what it leaves out.

Purpose. Why does this agent exist? What problem does it solve, and for whom? This includes the agent’s mission and its consumers—the humans, systems, or other agents that interact with it.

Capabilities. What can this agent do? This is the largest and most complex domain: the tasks it performs, the skills (domain expertise and reasoning patterns) it brings, the tools it connects to, and the knowledge sources it draws from.

Operations. How does this agent work? What triggers it? What workflows does it follow? What context does it maintain across interactions? What outputs does it produce?

Governance. How is this agent controlled? What level of autonomy does it have? What guardrails constrain its behavior? What metrics define success?

A specification that covers all four domains declaratively—and can be consumed by multiple runtimes—would be the equivalent of what Kubernetes manifests did for container orchestration: define the desired state, let the platform figure out execution.

None of the current specifications achieves this completely. But each takes a serious run at a different part of the problem.

Oracle Open Agent Specification

Oracle’s Open Agent Specification is the most ambitious attempt. Published in October 2025 with a 19-author technical report, it aims to be a complete, framework-agnostic representation of agent systems—not just individual agents, but multi-agent workflows with branching logic, parallel execution, and data flow.

The spec uses a component-based model serialized as JSON or YAML. The two top-level runnable types are Agents (conversational, ReAct-style) and Flows (directed workflows with nodes and edges). The JSON Schema defines roughly 180 component types across agents, LLM configurations, tools, flow nodes, edges, and transports.

# Oracle Agent Spec (simplified)
component_type: Agent
name: "Customer Support Agent"
system_prompt: |
  You are an expert in customer support.
  Please help the user with their requests.
llm_config:
  $component_ref: openai_gpt4
tools:
  - $component_ref: create_ticket_tool
  - $component_ref: order_lookup_flow
inputs:
  - title: customer_id
    type: string
outputs:
  - title: resolution
    type: string
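The `$component_ref` entries point at components defined elsewhere in the same document. A toy resolver for this indirection might look like the sketch below; the registry layout (a flat map of named components) and the function name are my own invention for illustration, not something the Oracle spec prescribes.

```python
# Toy resolver for $component_ref indirection in an Agent Spec-style document.
# The flat "components" registry is an assumption for illustration only.

def resolve_refs(node, components):
    """Recursively replace {"$component_ref": name} with the named component."""
    if isinstance(node, dict):
        if set(node) == {"$component_ref"}:
            return resolve_refs(components[node["$component_ref"]], components)
        return {k: resolve_refs(v, components) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(item, components) for item in node]
    return node

components = {
    "openai_gpt4": {"component_type": "LLMConfig", "model": "gpt-4"},
    "create_ticket_tool": {"component_type": "Tool", "name": "create_ticket"},
}
agent = {
    "component_type": "Agent",
    "llm_config": {"$component_ref": "openai_gpt4"},
    "tools": [{"$component_ref": "create_ticket_tool"}],
}
resolved = resolve_refs(agent, components)
```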

Portability story. This is where Oracle makes its strongest case. The spec ships with runtime adapters for LangGraph, AutoGen, and CrewAI, plus a reference runtime (WayFlow). A Python SDK (pyagentspec) lets you define an agent programmatically and export it to any supported framework. The ONNX analogy is intentional: define the agent once, run it on any compatible runtime.
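The adapter idea behind this portability can be sketched in a few lines: a registry maps target framework names to translation functions that consume the spec and emit framework-native configuration. Everything below (registry, function names, output shapes) is invented for illustration; the real pyagentspec SDK and its adapters expose their own APIs.

```python
# Sketch of the adapter pattern behind "define once, deploy everywhere".
# The registry and output shapes are illustrative assumptions, not the
# actual pyagentspec interfaces.

ADAPTERS = {}

def adapter(framework):
    """Register a translation function for a target framework."""
    def register(fn):
        ADAPTERS[framework] = fn
        return fn
    return register

@adapter("langgraph")
def to_langgraph(spec):
    # A real adapter would build a state graph; here we only shape config.
    return {"graph_name": spec["name"], "tools": [t["name"] for t in spec["tools"]]}

@adapter("crewai")
def to_crewai(spec):
    return {"agent": spec["name"], "tools": [t["name"] for t in spec["tools"]]}

def export(spec, framework):
    """Translate one spec into the requested framework's configuration."""
    return ADAPTERS[framework](spec)

spec = {"name": "support_agent", "tools": [{"name": "create_ticket"}]}
```

The point of the pattern is that the spec document stays constant while each target framework supplies its own translation.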

Canvas coverage. Oracle covers capabilities deeply—tasks, tools (including first-class MCP integration via MCPTool, MCPToolBox, and MCPToolSpec), and typed inputs/outputs. Operations are strong too: Flows model workflows with branching, parallel execution, and data routing. Multi-agent patterns (Swarm, Manager-Workers) handle coordination across agents. Purpose gets basic metadata (name, description) but no structured way to express mission or consumer intent. Governance is the biggest gap—memory is a roadmap item, guardrails have no dedicated component, and there’s no way to express autonomy levels, constraints, or success metrics.

Where it falls short. The spec’s 180 types create a steep learning curve. It models how an agent executes in fine detail but not why it exists or what controls it. And it’s entirely Oracle-driven—Apache 2.0 licensed but without foundation governance.

Current status. Version 26.1.0 (January 2026). Approximately 305 GitHub stars. Active development with AG-UI (CopilotKit) and Arize Phoenix integrations.

Eclipse LMOS Agent Definition Language

Eclipse LMOS takes a radically different approach. Where Oracle models the execution architecture, LMOS ADL models agent behavior—the use cases an agent handles, expressed in structured Markdown that non-engineers can read and edit.

The ADL grew out of Deutsche Telekom’s production deployment of their Frag Magenta OneBOT assistant—one of Europe’s largest multi-agent enterprise systems processing millions of interactions. The design reflects a specific conviction: the people who understand the business domain should be able to define agent behavior without writing code.

---
id: "password_reset"
examples:
  - "I forgot my password"
  - "Can't log in to my account"
---

### UseCase: password_reset

#### Description
Customer has forgotten their password and needs to reset it.

#### Steps
- Ask for the registered email address.
- Send a password reset link via @send_reset_email().

#### Solution
Guide the customer through the password reset process.

#### Fallback Solution
If the customer cannot access email, escalate to higher-tier support.

Portability story. Limited. ADL is tightly coupled to the LMOS/ARC runtime. An ADL compiler translates use cases into system prompts, but there are no adapters for other frameworks. The portability isn’t between runtimes—it’s between the business domain and the technical implementation. That’s a different kind of portability, and arguably more valuable for the enterprise use cases LMOS targets, but it doesn’t achieve “deploy everywhere.”
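The compilation step is conceptually simple. A toy version, not LMOS's actual compiler, might render each structured use case into a system-prompt section like this (the dict shape and rendering format are invented for illustration):

```python
# Toy "use case to system prompt" compiler in the spirit of the LMOS ADL.
# The real compiler is more sophisticated; this only illustrates rendering
# structured use cases into prompt text.

def compile_use_case(uc):
    """Render one parsed use case as a prompt section."""
    lines = [f"## Use case: {uc['id']}"]
    lines.append("Trigger examples: " + "; ".join(uc["examples"]))
    lines.append("Steps:")
    lines += [f"- {step}" for step in uc["steps"]]
    lines.append(f"Fallback: {uc['fallback']}")
    return "\n".join(lines)

use_case = {
    "id": "password_reset",
    "examples": ["I forgot my password", "Can't log in to my account"],
    "steps": [
        "Ask for the registered email address.",
        "Send a password reset link via @send_reset_email().",
    ],
    "fallback": "If the customer cannot access email, escalate to higher-tier support.",
}
prompt = compile_use_case(use_case)
```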

Canvas coverage. ADL shines on capabilities—skills are modeled as use cases with rich conditional logic (<isBusinessCustomer>), tool invocations (@function_name()), and cross-references between use cases (#use_case_id). Purpose is implicit in the use case descriptions but not formally structured. Operations are partially covered: steps define workflows within use cases, examples serve as triggers, and fallback solutions handle alternative paths. Governance gets partial treatment through conditional constraints and static response enforcement (double-quoted strings force literal output), but autonomy levels and metrics are absent.

Where it falls short. The tight coupling to LMOS/ARC means “define once, deploy everywhere” doesn’t apply. Multi-agent orchestration lives in the platform layer, not the spec. There are no typed inputs or outputs, and no model configuration—the spec deliberately abstracts away the LLM layer entirely.

Current status. Version 0.1.0-SNAPSHOT. Repo created February 2026. Under the Eclipse Foundation with Deutsche Telekom as the primary contributor. Very early stage but backed by real production experience at scale.

NextMoca Agent Definition Language

NextMoca’s ADL focuses on the static, inspectable blueprint of an individual agent. Built by a Palo Alto startup founded by former Oracle, Adobe, and Microsoft product leaders, it’s the simplest of the three specifications—and the most explicit about what it doesn’t cover.

{
  "name": "creative_producer_agent",
  "description": "Generates images from text prompts",
  "role": "Creative Producer",
  "version": 1,
  "llm": "openai",
  "llm_settings": { "temperature": 0.7, "max_tokens": 2048 },
  "tools": [
    {
      "name": "generate_image",
      "description": "Generate an image from a text prompt",
      "category": "Image Generation",
      "parameters": [
        { "name": "prompt", "type": "string", "required": true }
      ],
      "invocation": { "type": "python_function" },
      "dependencies": ["google-genai"],
      "keys_schema": [
        { "name": "GEMINI_API_KEY", "key_type": "environment_variable" }
      ]
    }
  ],
  "rag": [
    { "id": "brand-docs", "rag_type": "doc", "location_type": "s3" }
  ]
}

Portability story. NextMoca ADL is deliberately runtime-agnostic—it defines what an agent is, not how it runs. The spec stops at the static definition and defers execution to other standards (MCP for tool invocation, A2A for communication). This makes it the most portable in theory—any runtime could consume the definition—but the least actionable in practice, because the spec contains no execution semantics. A runtime would need to make many interpretation decisions that the spec doesn’t prescribe.
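In practice, a consuming runtime would at minimum validate a definition before interpreting it. A minimal loader over the fields shown above might look like this; the required-field list and checks are my guesses, not the official schema.

```python
import json

# Minimal validation pass over a NextMoca-style definition. The REQUIRED
# list and the per-tool checks are illustrative assumptions, not the
# official schema.

REQUIRED = ("name", "description", "llm", "tools")

def validate(definition):
    """Return a list of human-readable problems; empty means it passed."""
    errors = [f"missing field: {f}" for f in REQUIRED if f not in definition]
    for tool in definition.get("tools", []):
        for param in tool.get("parameters", []):
            if "type" not in param:
                errors.append(f"tool {tool['name']}: untyped parameter {param['name']}")
    return errors

raw = """{"name": "creative_producer_agent",
          "description": "Generates images from text prompts",
          "llm": "openai",
          "tools": [{"name": "generate_image",
                     "parameters": [{"name": "prompt", "type": "string"}]}]}"""
errors = validate(json.loads(raw))
```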

Canvas coverage. Purpose gets basic treatment through name, description, and role fields. Capabilities are the strongest area: tools are modeled with explicit parameters, invocation types, dependency tracking, and credential schemas. RAG configuration is a first-class concept—unusual among these specs. Operations are absent: no triggers, no workflows, no context management, no output modeling. Governance gets more attention than the others through audit fields (created_at, updated_by, change notes with semantic versioning) and planned permission/sandbox support, but autonomy levels and metrics aren’t captured.

Where it falls short. No workflow orchestration. No multi-agent coordination. No behavioral specification beyond tool definitions. The spec describes the agent’s inventory—what it has—but not its behavior—what it does. That makes it closer to a system of record than a deployable definition.

Current status. Version 0.1.0 (December 2025). 96 GitHub stars. Early stage, actively seeking community feedback.

What about the rest of the landscape?

Several other efforts touch on agent definition, but none aim at the “define once, deploy everywhere” problem:

A2A Agent Cards solve discovery, not definition. A JSON document at /.well-known/agent-card.json advertises what an agent can do and how to reach it. Agent Cards are the equivalent of a DNS record and business card—essential infrastructure, but they say nothing about how the agent is built, what model it uses, or how it behaves. With 100+ partner companies and Linux Foundation governance, they’re the de facto minimum viable agent identity. But they’re explicitly not a definition language.
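For contrast with the definition languages above, a minimal Agent Card might look like the snippet below. The fields follow the general shape of the A2A specification (name, endpoint, capabilities, skills), but this is a hand-written illustration; consult the spec for the authoritative schema.

```python
import json

# A minimal A2A-style Agent Card, served at /.well-known/agent-card.json.
# Field names follow the general shape of the A2A spec, but this is an
# illustration, not a schema-validated example.

agent_card = {
    "name": "Customer Support Agent",
    "description": "Handles customer support requests",
    "url": "https://agents.example.com/support",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "password_reset",
            "name": "Password reset",
            "description": "Guides customers through resetting a password",
        }
    ],
}

card_json = json.dumps(agent_card, indent=2)
```

Note what is absent: no model, no prompt, no workflow, no guardrails. The card advertises the agent; it does not define it.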

AGNTCY Open Agent Schema Framework (OASF) from Cisco, LangChain, and others adds a trust layer to agent descriptions through OCI containers and W3C Verifiable Credentials. It bridges A2A Agent Cards and MCP server descriptions, adding cryptographic “Agent Badges” for provable capabilities. This solves the trust problem (“can I believe what this agent claims?”) rather than the definition problem.

NLIP (Natural Language Interaction Protocol) from Ecma International takes the anti-schema approach: five formal standards (ECMA-430 through 434) define natural language as the universal interchange format between agents. No shared ontology required. This solves agent communication, not agent definition—you still need to build each agent separately.

Microsoft’s Declarative Agent Manifest (versions 1.2 through 1.6) comes closest to a portable definition within the Microsoft ecosystem. It specifies a Copilot agent’s instructions, capabilities, data sources, and actions in declarative JSON. It’s production-ready and genuinely deploy-from-manifest—but only to Microsoft 365 Copilot.

Evaluated against the Agent Canvas

The gap between what these specifications capture and what a complete agent definition requires is stark. Mapping each spec against the thirteen building blocks of the AI Agent Canvas:

| Canvas Block | Oracle Agent Spec | Eclipse LMOS ADL | NextMoca ADL |
|---|---|---|---|
| Mission | Name + description only | Implicit in use cases | Name, description, role |
| Consumers | Not modeled | Not modeled | Not modeled |
| Tasks | Agent + Flow components | Use case definitions | Not modeled |
| Skills | System prompt | Conditional logic, examples | Not modeled |
| Tools | Deep (Server/Client/Remote/MCP) | @function() syntax | Typed with deps + creds |
| Knowledge | Not modeled | Not modeled | RAG config (first-class) |
| Triggers | StartNode in Flows | Examples (implicit) | Not modeled |
| Workflows | Flows, nodes, edges, branching | Steps within use cases | Not modeled |
| Context | Roadmap (memory not yet spec'd) | Not modeled | Not modeled |
| Outputs | Typed output properties | Solutions, fallbacks | Not modeled |
| Autonomy | Not modeled | Not modeled | Not modeled |
| Guardrails | Not modeled | Conditional constraints, static responses | Planned (permissions, sandbox) |
| Metrics | Not modeled | Not modeled | Not modeled |

No specification covers more than half the canvas. Oracle comes closest on the capabilities and operations domains but ignores governance entirely. LMOS is the only spec that captures behavioral nuance (conditional logic, fallback paths, examples for intent matching) but can’t express operational structure. NextMoca captures the tooling inventory most precisely but has no model for what the agent actually does.

Most telling: no specification models autonomy, consumers, or metrics. These are among the most critical design decisions in enterprise agent deployments—and none of the current specs can express them.

What “define once, deploy everywhere” actually requires

The Kubernetes analogy is instructive. A Kubernetes manifest doesn’t describe how the container runtime works—it describes the desired state: which containers to run, what resources they need, how they should scale, and what health checks to perform. The platform handles everything else.

An equivalent agent definition would need to capture:

  1. What the agent is for (mission, consumers)—so platforms can route, prioritize, and govern it
  2. What the agent can do (tasks, skills, tools, knowledge)—so platforms can wire up the right capabilities
  3. How the agent operates (triggers, workflows, context, outputs)—so platforms can orchestrate execution
  4. How the agent is controlled (autonomy, guardrails, metrics)—so platforms can enforce constraints and measure success
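To make the four layers concrete, here is what a hypothetical unified manifest might look like, expressed as plain data. Every field name below is invented to illustrate the scope; no current specification defines this shape.

```python
# A hypothetical four-domain agent manifest covering purpose, capabilities,
# operations, and governance. All field names are invented for illustration;
# no current specification defines this shape.

manifest = {
    "purpose": {
        "mission": "Resolve customer support requests end to end",
        "consumers": ["end_customers", "support_supervisors"],
    },
    "capabilities": {
        "tasks": ["password_reset", "order_lookup"],
        "skills": ["customer_support"],
        "tools": ["create_ticket", "send_reset_email"],
        "knowledge": ["brand-docs"],
    },
    "operations": {
        "triggers": ["chat_message"],
        "workflows": ["triage_then_resolve"],
        "context": {"memory": "per-conversation"},
        "outputs": ["resolution"],
    },
    "governance": {
        "autonomy": "suggest_only",
        "guardrails": ["no_refunds_above_100"],
        "metrics": ["resolution_rate"],
    },
}
```

A platform consuming something like this could route on purpose, wire capabilities, orchestrate operations, and enforce governance, without reading a line of agent code.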

The current specifications each nail one or two of these layers while ignoring the others. Oracle models execution deeply but skips governance. LMOS captures behavior accessibly but couples to a single runtime. NextMoca describes the inventory cleanly but doesn’t model behavior or operations.

The specification that wins won’t just be the most complete—it needs to be complete and portable and adoptable. Completeness without portability is a proprietary platform. Portability without completeness means every runtime fills in the gaps differently, and “deploy everywhere” becomes “debug everywhere.”

Where this needs to go

The industry needs convergence. Not necessarily into a single specification, but into complementary layers that compose cleanly:

A2A for discovery. This is solved. Agent Cards at well-known URLs, with authentication and capability advertisement.

A definition language for the full agent design. This is the missing piece. It needs the scope of the Agent Canvas—purpose, capabilities, operations, and governance—expressed declaratively, validated by schema, and consumable by any runtime. Oracle’s ambition, LMOS’s accessibility, and NextMoca’s simplicity each point at parts of the answer, but no one has assembled the whole picture yet.

MCP for tool binding. Also largely solved. The definition language references tools; MCP handles the runtime connection.

What’s clear is that “define once, deploy everywhere” for agents is not yet achievable. The specifications are young, the problem is genuinely hard, and the industry hasn’t converged on what a complete agent definition even contains. But the need is real. You can’t govern what you can’t describe. You can’t port what isn’t specified. And you can’t build an ecosystem of interoperable agents when every framework speaks a different language for what an agent is.

The race isn’t for an “OpenAPI for agents”—that framing understates the problem. It’s for a Kubernetes manifest for agents: a declarative definition that captures the complete desired state and lets any platform bring it to life.