Remote OpenClaw Blog

Multi-Agent Communication Patterns for OpenClaw Teams

When you run multiple OpenClaw agents, the question is not whether they need to communicate. It is how. A research agent that discovers relevant information needs to pass it to your executive agent. A monitoring agent that detects a calendar conflict needs to notify the scheduling agent. An intake agent that receives a Telegram message needs to route it to the right specialist.

The communication pattern you choose determines how reliable, scalable, and debuggable your multi-agent deployment will be. Choose well and your agents coordinate seamlessly. Choose poorly and you spend your time debugging lost messages, duplicated tasks, and race conditions.

This guide covers the four communication patterns that work in production OpenClaw deployments, when to use each one, and how to implement them.

Why Communication Patterns Matter

In a single-agent deployment, the agent talks to external services (Gmail, Calendar, Telegram) and talks to you. Communication is one-dimensional.

In a multi-agent deployment, agents also need to talk to each other. This creates a communication graph that grows in complexity with each agent you add. Two agents have one possible communication path. Three agents have three. Ten agents have 45. Without a structured communication pattern, this graph becomes an unmaintainable mess.
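
That quadratic growth is just the handshake count: n agents have n(n-1)/2 possible point-to-point paths. A quick sketch:

```python
def communication_paths(n: int) -> int:
    """Number of possible point-to-point paths among n agents: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (2, 3, 10):
    print(n, communication_paths(n))  # 2 -> 1, 3 -> 3, 10 -> 45
```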

The right communication pattern provides three things: reliability (messages are not lost), ordering (messages arrive in the sequence they were sent), and decoupling (one agent failing does not break communication for other agents).

Pattern 1: File-Based Communication

The simplest communication pattern. Agents share a Docker volume, and they communicate by writing and reading files from that shared directory.

# docker-compose.yml
volumes:
  shared-comms:

services:
  agent-intake:
    volumes:
      - shared-comms:/app/shared
  agent-executive:
    volumes:
      - shared-comms:/app/shared

The intake agent writes a JSON file to /app/shared/tasks/ when it receives a new task. The executive agent watches that directory and picks up new files.

{
  "id": "task-20260406-001",
  "source": "telegram",
  "type": "research_request",
  "payload": "Find Q1 2026 revenue data for Stripe",
  "created_at": "2026-04-06T09:15:00Z",
  "status": "pending"
}
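
A minimal sketch of both sides of this handoff, assuming the shared-volume layout above (the function names and directory argument are illustrative): the writer uses an atomic rename so the reader never observes a half-written file, and the reader claims a file by renaming it, which also mitigates the race condition noted below.

```python
import json
import os
import time
import uuid

def write_task(tasks_dir: str, payload: str, task_type: str, source: str) -> str:
    """Intake side: write the task file atomically so a reader never sees partial JSON."""
    task = {
        "id": f"task-{uuid.uuid4().hex[:12]}",
        "source": source,
        "type": task_type,
        "payload": payload,
        "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "status": "pending",
    }
    tmp = os.path.join(tasks_dir, f".{task['id']}.tmp")
    final = os.path.join(tasks_dir, f"{task['id']}.json")
    with open(tmp, "w") as f:
        json.dump(task, f)
    os.rename(tmp, final)  # atomic on POSIX: the file appears fully written or not at all
    return task["id"]

def claim_tasks(tasks_dir: str):
    """Executive side: claim each task file by renaming it before processing,
    so a second agent polling the same directory skips it."""
    for name in sorted(os.listdir(tasks_dir)):
        if not name.endswith(".json"):
            continue
        path = os.path.join(tasks_dir, name)
        claimed = path + ".claimed"
        try:
            os.rename(path, claimed)  # fails if another agent already claimed it
        except OSError:
            continue
        with open(claimed) as f:
            yield json.load(f)
```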

Advantages: Zero additional infrastructure. Works on any VPS. Easy to debug by inspecting files directly. Great for getting started with multi-agent communication.

Disadvantages: File system watching is not instant (polling introduces latency). No built-in ordering guarantees when multiple files arrive simultaneously. Race conditions are possible if two agents try to process the same file. Does not scale beyond 3-4 agents with moderate message volume.

Best for: Two-agent setups, low-frequency communication (fewer than 100 messages per hour), getting started before adding infrastructure.

Pattern 2: HTTP Direct Messaging

Each OpenClaw agent exposes an HTTP API. Agents communicate by making HTTP requests directly to each other's endpoints.

# Agent configuration
api:
  enabled: true
  port: 3001
  endpoints:
    - path: /tasks
      method: POST
      handler: receive_task

The sending agent makes a POST request to the receiving agent:

curl -X POST http://agent-executive:3001/tasks \
  -H "Content-Type: application/json" \
  -d '{"type": "research_request", "payload": "..."}'

Advantages: Near-instant delivery. Request/response model allows the sender to know immediately if the message was received. Easy to implement with OpenClaw's built-in HTTP skill.

Disadvantages: Tight coupling between agents. If the receiving agent is down, the message is lost unless the sender implements retry logic. Adding a new agent requires updating the configuration of every agent that needs to communicate with it. Does not support one-to-many broadcasting without explicit fan-out logic.

Best for: Sequential workflows where Agent A always hands off to Agent B. Low agent counts (2-4) where the communication graph is simple. Scenarios where immediate acknowledgment is important.
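
Because a down receiver means a lost message, production senders wrap the call in retry logic. A sketch with exponential backoff, using only the standard library (the endpoint path mirrors the example above; the timeout and backoff schedule are illustrative):

```python
import json
import time
import urllib.error
import urllib.request

def post_with_retry(url: str, payload: dict, attempts: int = 4,
                    base_delay: float = 0.5) -> bool:
    """POST JSON to another agent, retrying with exponential backoff on failure."""
    body = json.dumps(payload).encode()
    for attempt in range(attempts):
        try:
            req = urllib.request.Request(
                url, data=body, headers={"Content-Type": "application/json"}
            )
            with urllib.request.urlopen(req, timeout=5) as resp:
                return 200 <= resp.status < 300
        except (urllib.error.URLError, OSError):
            if attempt == attempts - 1:
                return False  # receiver stayed down; caller must queue or drop the task
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
    return False
```

If the final attempt fails, the sender still has to decide what to do with the message, which is exactly the gap a broker-based pattern closes.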

Pattern 3: Pub/Sub with Redis

The publish/subscribe pattern decouples senders from receivers. Agents publish messages to named channels, and other agents subscribe to the channels they care about. Redis provides the infrastructure.

# Agent configuration
messaging:
  backend: redis
  url: redis://openclaw-redis:6379
  publish:
    - channel: new_emails
    - channel: task_completed
  subscribe:
    - channel: new_emails
      handler: process_email
    - channel: calendar_conflicts
      handler: resolve_conflict

When the email agent receives a new email, it publishes to the new_emails channel. The calendar agent and the task agent both subscribe to this channel and independently process the email for their respective domains.
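
The routing semantics can be sketched in-process. A real deployment would go through a Redis client, but the fan-out behavior is the same (the channel name comes from the config above; the message fields are illustrative):

```python
from collections import defaultdict
from typing import Callable

class PubSub:
    """Minimal in-process stand-in for Redis pub/sub channel routing."""
    def __init__(self):
        self._subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable) -> None:
        self._subs[channel].append(handler)

    def publish(self, channel: str, message: dict) -> int:
        """Deliver to every subscriber; returns the receiver count, like Redis PUBLISH."""
        for handler in self._subs[channel]:
            handler(message)
        return len(self._subs[channel])

bus = PubSub()
seen = []
# Both the calendar agent and the task agent react to the same event,
# and the publisher knows nothing about either of them.
bus.subscribe("new_emails", lambda m: seen.append(("calendar", m["subject"])))
bus.subscribe("new_emails", lambda m: seen.append(("tasks", m["subject"])))
receivers = bus.publish("new_emails", {"subject": "Q1 planning"})
```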

Advantages: Complete decoupling. Publishers do not know or care who subscribes. Adding a new agent that reacts to emails requires zero changes to the email agent. Supports one-to-many broadcasting naturally. Redis handles message routing efficiently.

Disadvantages: Messages are fire-and-forget in basic Redis pub/sub. If a subscriber is offline when a message is published, it misses the message. For guaranteed delivery, use Redis Streams instead of basic pub/sub. Adds Redis as an infrastructure dependency.

Best for: Event-driven architectures. Scenarios where multiple agents need to react to the same event. Deployments with 5+ agents where the communication graph would be unmanageable with direct messaging.

For workflow-level patterns that build on pub/sub, see the operator workflows guide.

Pattern 4: Shared State Store

Instead of passing messages, agents read and write to a shared state store. This is less about explicit communication and more about implicit coordination through shared data.

# Agent configuration
state:
  backend: redis
  url: redis://openclaw-redis:6379
  namespace: openclaw:state
  sync_interval: 5

All agents read from and write to the same state store. The email agent updates the inbox_summary key. The executive agent reads inbox_summary to decide priorities. The calendar agent reads today_schedule to check for conflicts.
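
The mental model can be sketched with an in-memory stand-in for the Redis-backed store (key names follow the examples above; a real deployment reads and writes Redis under the configured namespace):

```python
import time

class StateStore:
    """Minimal in-memory stand-in for a namespaced shared state store."""
    def __init__(self, namespace: str):
        self.namespace = namespace
        self._data: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value) -> None:
        # Store a write timestamp alongside the value so readers can judge staleness.
        self._data[f"{self.namespace}:{key}"] = (time.time(), value)

    def get(self, key: str, default=None):
        entry = self._data.get(f"{self.namespace}:{key}")
        return entry[1] if entry else default

store = StateStore("openclaw:state")
# The email agent writes; the executive agent reads later. No message passing involved.
store.set("inbox_summary", {"unread": 7, "urgent": 2})
summary = store.get("inbox_summary")
```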

Advantages: Simple mental model. No explicit message passing. Agents just read the latest state when they need it. Works well for slow-changing data like daily summaries, configuration, and status dashboards.

Disadvantages: Race conditions when multiple agents write to the same key. No notification mechanism; agents must poll for changes. Not suitable for time-sensitive coordination where agents need to react immediately to events. Harder to trace the flow of information because there are no discrete messages to log.

Best for: Sharing reference data (schedules, summaries, configuration) across agents. Dashboard and monitoring use cases where agents update a shared status view. Combining with pub/sub for a hybrid approach where state changes trigger pub/sub notifications.

Choosing the Right Pattern

Most production multi-agent deployments use a combination of patterns rather than a single one. Here is a decision framework:

Scenario | Recommended pattern
2 agents, simple handoff | File-based or HTTP direct
3-4 agents, sequential workflows | HTTP direct with retry logic
5+ agents, event-driven | Redis pub/sub (or Streams for guaranteed delivery)
Shared configuration and status | Shared state store
Mixed workloads at scale | Pub/sub for events + shared state for reference data

The most common production pattern for operators with 5-10 agents is Redis pub/sub for event-driven task routing combined with shared state for reference data. This gives you the reactivity of event-driven communication with the simplicity of shared state for data that does not need real-time updates.

Preventing Duplicate Processing

When multiple agents subscribe to the same channel or pull from the same work queue, duplicate processing is inevitable without explicit prevention. Two agents see the same new email, both try to process it, and you end up with duplicate calendar events or duplicate task items.

The solution is distributed locking using Redis:

locking:
  backend: redis
  url: redis://openclaw-redis:6379
  ttl_seconds: 300
  retry_delay_ms: 100

When an agent picks up a task, it acquires a lock with the task ID as the key. Any other agent attempting to acquire the same lock receives a rejection and skips the task. The TTL ensures that if the processing agent crashes, the lock expires and another agent can retry.
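
The acquire/skip/expire cycle can be sketched with an in-memory stand-in (a real deployment issues Redis SET with the NX and EX options, but the logic is the same):

```python
import time

class TaskLock:
    """In-memory sketch of the Redis SET NX EX locking pattern."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._locks: dict[str, float] = {}  # task_id -> expiry timestamp

    def acquire(self, task_id: str) -> bool:
        now = time.monotonic()
        expiry = self._locks.get(task_id)
        if expiry is not None and expiry > now:
            return False  # another agent holds the lock; skip this task
        self._locks[task_id] = now + self.ttl  # a crashed holder's lock expires via TTL
        return True

    def release(self, task_id: str) -> None:
        self._locks.pop(task_id, None)

locks = TaskLock(ttl_seconds=300)
first = locks.acquire("task-20260406-001")   # this agent wins the task
second = locks.acquire("task-20260406-001")  # a second agent is rejected and skips it
```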

For work queues specifically, use Redis Streams with consumer groups. Consumer groups guarantee that each message is delivered to exactly one consumer within the group, eliminating the need for application-level locking.
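
The exactly-one-consumer property can be illustrated in-process. Real consumer groups assign messages on demand via XREADGROUP rather than strict round-robin, so this sketch only shows the delivery guarantee, not the scheduling:

```python
from itertools import cycle

def dispatch(messages: list, consumers: list) -> dict:
    """In-process analogue of a consumer group's guarantee:
    each message is handed to exactly one consumer in the group."""
    assignments = {name: [] for name in consumers}
    rr = cycle(consumers)
    for msg in messages:
        assignments[next(rr)].append(msg)
    return assignments

work = dispatch(["m1", "m2", "m3", "m4", "m5"], ["agent-a", "agent-b"])
```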

The multi-agent architecture guide covers the full technical detail of consumer groups and exactly-once semantics.

Real-World Agent Topologies

Based on deployments shared in the community, here are the three most common multi-agent topologies:

Hub-and-spoke. One central executive agent coordinates all others. Specialist agents (email, calendar, research, content) report to the executive agent, which makes routing decisions. This is the most common topology for personal productivity deployments with 3-5 agents.
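
In this topology the executive agent's routing decision reduces to a lookup from task type to specialist (the agent and task-type names here are illustrative):

```python
# Hypothetical routing table for a hub-and-spoke executive agent.
ROUTES = {
    "research_request": "agent-research",
    "calendar_conflict": "agent-calendar",
    "email_triage": "agent-email",
}

def route(task: dict) -> str:
    """Return the specialist agent for a task; unknown types stay with the hub."""
    return ROUTES.get(task["type"], "agent-executive")
```

Adding a new specialist is a one-line change to the routing table, which is why this topology extends so easily.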

Pipeline. Agents are arranged in a processing chain. Intake agent captures input, processing agent transforms it, action agent executes the result, notification agent confirms completion. Each agent hands off to the next. This works well for defined, repeatable workflows.

Mesh. Every agent can communicate with every other agent through a shared message broker. There is no central coordinator. Agents react to events and coordinate through shared state and pub/sub channels. This is the most flexible but hardest to debug and is typically used in enterprise deployments with 10+ agents.

For most operators getting started with multi-agent setups, the hub-and-spoke topology with the executive agent as the coordinator is the most practical starting point. It is easy to reason about, easy to debug, and easy to extend by adding new specialist agents without restructuring the entire system. See the multi-agent setup guide for step-by-step configuration of a hub-and-spoke deployment.


Frequently Asked Questions

What is the simplest way for two OpenClaw agents to communicate?

The simplest method is file-based communication through a shared volume. Agent A writes a JSON file to a shared directory, and Agent B watches that directory for new files. This requires no additional infrastructure beyond a shared Docker volume and works reliably for 2-3 agents with low-frequency communication. For higher throughput or more agents, upgrade to Redis pub/sub or HTTP-based messaging.

Should OpenClaw agents communicate directly or through a message broker?

For 2-4 agents, direct communication (HTTP calls between agents) is simpler and has lower latency. For 5+ agents, a message broker (Redis, NATS) is better because it decouples agents from each other, handles message persistence, and prevents tight coupling where one agent failure cascades to others. The broker pattern also makes it easier to add new agents without modifying existing ones.

How do I prevent two OpenClaw agents from processing the same task?

Use distributed locks via Redis. When an agent picks up a task, it acquires a lock with a unique task ID and a TTL (time-to-live). Other agents checking for that task see the lock and skip it. If the processing agent crashes, the TTL expires and another agent can pick up the task. Combined with idempotent task handlers, this locking pattern approximates exactly-once processing and is essential for any multi-agent deployment handling shared work queues.

What is the pub/sub pattern and when should I use it for OpenClaw agents?

Pub/sub (publish/subscribe) is a pattern where agents publish messages to named channels and other agents subscribe to channels they care about. Use it when multiple agents need to react to the same event (for example, a new email arrives and both the calendar agent and the task agent need to process it). Redis pub/sub is the most common implementation for OpenClaw multi-agent setups.