Multi-agent coordination for AI coding

Break big tasks into small ones. Spawn agents to work in parallel. Learn from what works.

"With event sourcing, you can design an event such that it is a self-contained description of a user action."

— Martin Kleppmann, Designing Data-Intensive Applications

The Problem

You ask your AI agent to "add OAuth authentication." Five minutes later, it's going down the wrong path. Or touching files it shouldn't. Or making changes that conflict with your other session.

AI agents are single-threaded, context-limited, and have no memory of what worked before.

The Solution

  • Break tasks into pieces that can be worked on simultaneously
  • Spawn parallel workers that don't step on each other
  • Remember what worked and avoid patterns that failed
  • Survive context death without losing progress

That's what Swarm does.

Parallel Execution

Break tasks into subtasks, spawn workers that run simultaneously

Git-Backed Tracking

Cells are stored in .hive/, synced with git, and survive sessions

Agent Coordination

File reservations prevent conflicts, agents communicate via Swarm Mail

Learning System

Patterns that work get promoted, failures become anti-patterns

1. Install & Setup

npm install -g opencode-swarm-plugin@latest
swarm setup

Setup configures OpenCode, checks dependencies, and migrates any existing data automatically.
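If you want to confirm the install went through, the doctor command (also listed under CLI Commands below) checks dependencies and health:

```shell
swarm doctor    # verify dependencies and overall health
```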

Optional: Semantic Memory

For persistent learning across sessions, install Ollama to generate embeddings (memories are stored locally in embedded libSQL):

brew install ollama
ollama serve &
ollama pull mxbai-embed-large
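Once the model is pulled, you can sanity-check that embeddings are being served by querying Ollama's local HTTP API (default port 11434):

```shell
# ask Ollama for an embedding; a JSON response with an "embedding" array means it's working
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "mxbai-embed-large", "prompt": "hello swarm"}'
```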

2. Run Your First Swarm

In any OpenCode session, use the /swarm command:

/swarm "add user authentication with OAuth"

That's it. The coordinator analyzes the task, breaks it into subtasks, spawns parallel workers, and tracks everything in git-backed work items.

3. What Happens Under the Hood

Task Analyzed

Coordinator queries past solutions (CASS), picks a strategy (file-based, feature-based, or risk-based)

Cells Created

Epic + subtask cells created atomically in .hive/, tracked in git

Workers Spawn

Parallel agents start, each gets a subtask + shared context

Files Reserved

Workers reserve files before editing, preventing conflicts

Work Completed

Workers finish, auto-release reservations, store learnings

Learning Recorded

Outcome tracked: fast + success = proven pattern, slow + errors = anti-pattern
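Because cells are ordinary files under .hive/ committed to git, you can audit a swarm run with plain git commands; a quick sketch:

```shell
# list the cell files the coordinator created
ls .hive/

# see when cells were created and updated over time
git log --oneline -- .hive/
```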

4. Essential Tools

Tool                       Purpose
/swarm "task"              Decompose and parallelize a task
hive_ready()               Get next unblocked cell
hive_sync()                Sync to git (MANDATORY before ending)
swarmmail_reserve()        Reserve files before editing
skills_use()               Load domain expertise into context
semantic-memory_store()    Save learnings for future sessions

40+ tools available. See full reference.

5. Session Workflow

Start

hive_ready()

What's next? Get the highest-priority unblocked cell.

Work

/swarm "task"

Use tools, reserve files, coordinate with other agents.

End (MANDATORY)

hive_sync() + git push

The plane is not landed until git push succeeds.
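A minimal end-of-session sequence, assuming your work is already committed: call hive_sync() inside the OpenCode session, then push from your shell:

```shell
# inside the OpenCode session, flush cell state first:
#   hive_sync()
# then, from your shell, land the plane:
git push
```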

CLI Commands

swarm setup     # Install and configure (run once)
swarm doctor    # Check dependencies and health
swarm init      # Initialize hive in current project
swarm config    # Show config file paths
swarm migrate   # Migrate legacy databases