Users are the human participants who interact with and oversee AI agent systems. They are the actors who initiate work, provide context, approve decisions, consume outputs, and ultimately determine whether an agentic system is delivering value. In the agentic primitives framework, users represent the human side of the equation—the people whose needs, judgment, and authority give agents their purpose and their boundaries.

It might seem odd to include users as a primitive in a framework about building AI agent systems. After all, users aren’t something you build—they already exist. But defining users explicitly as a primitive serves a critical purpose: it forces system designers to think carefully about the human roles in their agentic architecture before they start thinking about agents, tools, and workflows. Too many agentic systems are designed agent-first, with the human experience treated as an afterthought. The result is agents that are technically capable but practically unusable—or worse, agents that operate autonomously in contexts where human involvement was essential but never designed in.


The Roles Users Play

Users are not a monolithic group. Different humans interact with agentic systems in fundamentally different ways, and the design of the system needs to accommodate each role.

Requesters are the users who initiate work. They provide the objectives, constraints, and context that set agents in motion. A customer asking a support agent for help, an analyst requesting a research summary, a manager asking an agent to draft a report—these are all requesters. Their primary concern is communicating what they need clearly enough for the agent to deliver, and receiving results they can trust and act on.

Operators are the users who monitor and manage agent behavior at runtime. They watch dashboards, review agent decisions, intervene when something goes wrong, and ensure that agents are performing within expected parameters. An operations team monitoring a fleet of customer service agents, a compliance officer reviewing flagged transactions, a team lead checking the quality of agent-generated content—these are operators. Their primary concern is visibility into what agents are doing and the ability to intervene when needed.

Administrators are the users who configure and govern agent systems. They set permissions, define guardrails, configure tools, manage agent instructions, and establish the policies that shape agent behavior. A platform administrator setting up role-based access control, a governance lead defining system instructions, an architect configuring agent-to-tool connections—these are administrators. Their primary concern is ensuring that agents are properly configured, appropriately constrained, and aligned with organizational policies.

Approvers are the users who authorize agent actions that exceed autonomous boundaries. When an agent encounters a decision that requires human judgment—a transaction above a threshold, an exception to standard policy, a communication to an external party—the approver reviews the agent’s recommendation and gives the go-ahead or redirects. Their primary concern is making informed decisions quickly without becoming a bottleneck.

These roles aren’t mutually exclusive. A single person might be a requester in one interaction, an approver in another, and an operator in a third. The point isn’t to create rigid role assignments but to ensure that the agentic system is designed to support each mode of human participation effectively.
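
These roles can also be made explicit in the system’s own data model rather than left implicit in documentation. The sketch below is a minimal illustration of that idea, using hypothetical names such as `UserRole` and `User`; it is not tied to any particular agent platform.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class UserRole(Enum):
    """The four modes of human participation described above."""
    REQUESTER = auto()      # initiates work; supplies objectives and context
    OPERATOR = auto()       # monitors runtime behavior; intervenes when needed
    ADMINISTRATOR = auto()  # configures permissions, guardrails, and policies
    APPROVER = auto()       # authorizes actions beyond the agent's autonomous scope


@dataclass
class User:
    """A human participant; one person can hold several roles at once."""
    user_id: str
    name: str
    roles: set[UserRole] = field(default_factory=set)

    def can(self, role: UserRole) -> bool:
        return role in self.roles


# Example: a team lead who both requests work and approves escalations.
lead = User("u-100", "Team Lead", {UserRole.REQUESTER, UserRole.APPROVER})
assert lead.can(UserRole.APPROVER) and not lead.can(UserRole.ADMINISTRATOR)
```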

Users and the Autonomy Spectrum

The relationship between users and agents is defined largely by autonomy—how much independent authority the agent has, and where the human’s judgment is required.

At the low-autonomy end, users are deeply involved in every step. The human provides specific instructions, the agent executes, and the human reviews the result before anything happens. This is the human-in-the-loop model, and it’s appropriate for high-stakes, low-trust scenarios—early agent deployments, regulated processes, situations where errors are costly and irreversible.

At the mid-autonomy level, users set objectives and monitor outcomes, intervening only when the agent encounters situations outside its defined boundaries or when periodic review is warranted. This is the human-on-the-loop model. The agent operates independently for routine work, but the human maintains oversight and can step in when needed. Most enterprise agentic deployments today operate in this zone.

At the high-autonomy end, users define goals and constraints, and the agent operates independently within those boundaries. The human reviews aggregate results rather than individual decisions, and intervention is exception-based. This is full autonomy within defined scope, and it’s appropriate only for mature agents operating on well-understood tasks where the organization has established high trust.

The autonomy framework makes clear that this isn’t a single spectrum—it’s a multidimensional profile. An agent might operate with full autonomy for tool selection while requiring human approval for collaboration decisions. A user’s involvement varies not just by agent but by dimension, by task criticality, and by the organization’s operational maturity with agentic systems.
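
One way to read that multidimensional profile is as a per-dimension autonomy setting rather than a single dial. The sketch below is a hypothetical illustration using assumed names (`AutonomyLevel`, `AutonomyProfile`); real platforms will model this differently.

```python
from dataclasses import dataclass
from enum import Enum


class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1   # human reviews every step before it takes effect
    HUMAN_ON_THE_LOOP = 2   # agent acts; human monitors and handles exceptions
    FULL_AUTONOMY = 3       # agent acts within scope; human reviews aggregates


@dataclass
class AutonomyProfile:
    """An agent's autonomy is set per dimension, not as one overall level."""
    task: AutonomyLevel
    tool: AutonomyLevel
    plan: AutonomyLevel
    collaboration: AutonomyLevel


# Example: full autonomy for tool selection, but human approval required
# for collaboration decisions, as described above.
profile = AutonomyProfile(
    task=AutonomyLevel.HUMAN_ON_THE_LOOP,
    tool=AutonomyLevel.FULL_AUTONOMY,
    plan=AutonomyLevel.HUMAN_ON_THE_LOOP,
    collaboration=AutonomyLevel.HUMAN_IN_THE_LOOP,
)
```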

Designing for Users

Designing agentic systems for users means more than building a chat interface. It requires thinking through the complete human experience at every touchpoint.

Transparency is the foundation of user trust. Users need to understand what the agent is doing, why it’s doing it, and how confident it is. An agent that produces results without explanation—or worse, one that produces confident-sounding results with no indication of uncertainty—erodes trust over time. Effective agent design surfaces reasoning, highlights limitations, and makes it easy for users to verify the agent’s work when they choose to.

Appropriate interruption determines whether human-in-the-loop controls help or hinder. An agent that asks for approval on every minor decision creates fatigue—users start rubber-stamping approvals without actually reviewing them, defeating the purpose of human oversight. An agent that never escalates creates anxiety—users don’t know what the agent is doing and worry about unseen errors. The right design calibrates interruption to the significance of the decision, escalating the decisions that matter without overwhelming users with the ones that don’t.
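
Calibrating interruption can be as simple as routing decisions by significance instead of interrupting uniformly. The sketch below shows one possible shape, with hypothetical names (`Decision`, `needs_approval`) and an assumed monetary threshold; the thresholds themselves are a governance choice set by administrators, not something the code should invent.

```python
from dataclasses import dataclass

# Hypothetical significance thresholds; in practice these come from policy,
# configured by administrators rather than hard-coded by the agent developer.
APPROVAL_AMOUNT_THRESHOLD = 10_000.00
ALWAYS_ESCALATE_CATEGORIES = {"external_communication", "policy_exception"}


@dataclass
class Decision:
    category: str        # e.g. "refund", "external_communication"
    amount: float = 0.0  # monetary impact, if any
    reversible: bool = True


def needs_approval(decision: Decision) -> bool:
    """Escalate only when the decision is significant enough to warrant it."""
    if decision.category in ALWAYS_ESCALATE_CATEGORIES:
        return True
    if decision.amount >= APPROVAL_AMOUNT_THRESHOLD:
        return True
    if not decision.reversible:
        return True
    return False  # routine, reversible, low-impact: proceed without interrupting


# A $50 reversible refund proceeds; a $25,000 refund goes to an approver.
assert not needs_approval(Decision("refund", amount=50.0))
assert needs_approval(Decision("refund", amount=25_000.0))
```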

Feedback mechanisms close the loop between user experience and agent improvement. When an agent gets something wrong, users need an easy way to flag the error, explain what should have happened, and see that their feedback leads to improvement. Without feedback mechanisms, the same errors recur, user frustration grows, and trust degrades. With effective feedback mechanisms, agents improve over time and users develop confidence that the system is learning from its mistakes.
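
At minimum, a feedback mechanism needs to capture what went wrong and what should have happened, so that signal is available for later improvement. The sketch below is a simplified illustration using assumed names (`FeedbackRecord`, `record_feedback`) and an in-memory store as a stand-in for whatever the platform actually provides.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class FeedbackRecord:
    """One piece of user feedback tied to a specific agent output."""
    interaction_id: str
    user_id: str
    verdict: str            # e.g. "incorrect", "incomplete", "correct"
    expected_outcome: str   # what the user says should have happened
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# In-memory log purely for illustration; a real system would persist this
# and feed it into evaluation and improvement workflows.
FEEDBACK_LOG: list[FeedbackRecord] = []


def record_feedback(interaction_id: str, user_id: str,
                    verdict: str, expected_outcome: str) -> FeedbackRecord:
    record = FeedbackRecord(interaction_id, user_id, verdict, expected_outcome)
    FEEDBACK_LOG.append(record)
    return record


record_feedback("run-42", "u-100", "incorrect",
                "The summary should have excluded draft documents.")
```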

Progressive trust building acknowledges that the user-agent relationship evolves. Early interactions should be more transparent, more conservative, and more frequently punctuated by human checkpoints. As the agent demonstrates reliability, the interaction model can shift—fewer interruptions, more delegation, greater agent autonomy. This progression should be deliberate and visible, so users understand that the system is earning their trust through demonstrated competence rather than simply asserting it.
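
Progressive trust can be made concrete by tracking an agent’s demonstrated reliability and letting that record, rather than a static setting, determine how often human checkpoints fire. The sketch below is a simplified illustration with an assumed name (`TrustTracker`) and an arbitrary threshold schedule; the specific numbers are placeholders, not recommendations.

```python
from dataclasses import dataclass


@dataclass
class TrustTracker:
    """Tracks demonstrated reliability and maps it to a checkpoint policy."""
    successes: int = 0
    failures: int = 0

    def record(self, succeeded: bool) -> None:
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

    def checkpoint_every(self) -> int:
        """Return how many actions may pass between human checkpoints.

        The schedule below is arbitrary; the point is that oversight loosens
        only as reliability is demonstrated, and tightens again on failures.
        """
        total = self.successes + self.failures
        if total < 20 or self.failures > self.successes * 0.05:
            return 1      # early or unreliable: review every action
        if total < 100:
            return 10     # some track record: periodic review
        return 50         # established reliability: exception-based review


tracker = TrustTracker()
for _ in range(30):
    tracker.record(succeeded=True)
assert tracker.checkpoint_every() == 10
```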

Users in Multi-Agent Systems

In multi-agent systems, the user’s role becomes more complex. Instead of interacting with a single agent, the user might be the requester who initiates a task that cascades through an orchestrator and multiple specialist agents, eventually receiving a synthesized result that reflects the work of agents they never directly interacted with.

This raises important questions about visibility and accountability. When a multi-agent system produces an incorrect result, which agent was responsible? The orchestrator that decomposed the task? The specialist that produced flawed analysis? The agent that synthesized outputs without catching the error? Users need enough visibility into the multi-agent process to understand what happened and where things went wrong—without being forced to trace through every delegation and every specialist interaction to find the answer.

Effective multi-agent design provides users with tiered visibility. At the surface level, the user sees the final result and a high-level summary of how it was produced. At the detail level, the user can drill into specific steps, specialist outputs, and decision points. This layered approach respects the user’s time while preserving the ability to investigate when investigation is warranted.
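
Tiered visibility maps naturally onto a run trace that carries both a surface summary and drill-down detail. The sketch below uses hypothetical names (`StepTrace`, `RunTrace`) to illustrate the two levels; actual multi-agent frameworks expose traces through their own APIs.

```python
from dataclasses import dataclass, field


@dataclass
class StepTrace:
    """Detail level: one delegation or specialist step in the run."""
    agent_name: str
    action: str
    output_summary: str


@dataclass
class RunTrace:
    """Surface level: the final result and a high-level account of the run,
    with per-step detail available on demand."""
    final_result: str
    summary: str
    steps: list[StepTrace] = field(default_factory=list)

    def surface_view(self) -> str:
        return f"{self.final_result}\n\nHow this was produced: {self.summary}"

    def detail_view(self) -> str:
        return "\n".join(
            f"[{s.agent_name}] {s.action}: {s.output_summary}" for s in self.steps
        )


trace = RunTrace(
    final_result="Q3 competitor analysis (12 pages)",
    summary="Orchestrator split the request across two research specialists "
            "and a synthesis agent.",
    steps=[
        StepTrace("orchestrator", "decompose", "3 subtasks created"),
        StepTrace("market-research", "gather", "8 sources reviewed"),
        StepTrace("synthesizer", "draft", "combined findings into the report"),
    ],
)
print(trace.surface_view())   # what most users read
# trace.detail_view() is there when investigation is warranted
```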

Why Users Matter as a Primitive

Defining users as an explicit primitive in the agentic framework serves several purposes that improve system design.

It centers design on human needs. When users are a first-class primitive, the question “what do users need from this system?” is asked at the same time as “what should this agent do?”, not as an afterthought once the agent is built.

It forces explicit role definition. Identifying which humans participate in the system, in what roles, and at what touchpoints prevents the common failure mode where agent autonomy exceeds what any human intended because nobody specified where the boundaries should be.

It creates governance accountability. When users are defined in the architecture—with their roles, permissions, and escalation paths—there’s a clear record of who is responsible for what. This is essential for compliance in regulated industries and for organizational confidence in agentic systems generally.

It connects to the autonomy framework. The autonomy dimensions—task, tool, plan, collaboration—are meaningless without reference to the users whose trust, oversight, and authority define where autonomy boundaries sit. Users are one half of the autonomy equation; agents are the other.

Also Known As

Users appear under various names depending on the platform and context. You’ll encounter them as human actors (in formal system design), end users (when referring specifically to those who consume agent outputs), operators (when referring to those who manage agent systems), stakeholders (in broader organizational contexts), principals (in security and authorization contexts, referring to the human whose authority the agent acts under), or human-in-the-loop participants (in autonomy and oversight discussions). In some multi-agent frameworks, users are called human agents to emphasize that humans and AI agents are both participants in the same system. The defining characteristic across all terminology is the same: the human beings whose needs, judgment, and authority shape how agentic systems operate.

Key Takeaways

Users are the human participants whose needs give agents their purpose and whose authority defines agent boundaries. They participate as requesters, operators, administrators, and approvers—roles that aren’t mutually exclusive but that each require different design considerations. The user-agent relationship is defined by autonomy—how much independence the agent has and where human judgment is required—and this relationship evolves over time as trust is established through demonstrated competence. In the agentic primitives framework, users sit alongside agents as the two actor primitives, and explicitly defining them forces the human-centered design thinking that distinguishes effective agentic systems from technically impressive but practically unusable ones.