Multi-agent orchestrator configuration for OpenCode — 7 specialized agents with persistent memory via megamemory


OpenCode Orchestrator

Route every task to a specialist instead of forcing one assistant to do everything.

Install · Architecture · Workflows · Configure · Docs


Quick Start

curl -fsSL https://raw.githubusercontent.com/0xK3vin/OpenCodeOrchestrator/main/install.sh | bash

Updating

Re-running the installer is update-safe by default:

  • Existing model: values in agent frontmatter are preserved automatically.
  • If agent prompt body text was customized, the installer prompts you to resolve each conflict (overwrite, skip, or view diff).
  • In non-interactive environments (for example curl ... | bash in CI), agent prompt conflicts default to upstream content while preserving your model: values and keeping backups.

Use --force for a clean overwrite of all installed files:

curl -fsSL https://raw.githubusercontent.com/0xK3vin/OpenCodeOrchestrator/main/install.sh | bash -s -- --force

Configure models (optional):

curl -fsSL https://raw.githubusercontent.com/0xK3vin/OpenCodeOrchestrator/main/configure.sh | bash

Install from a local clone (uses local files, including unpushed changes):

git clone https://github.com/0xK3vin/OpenCodeOrchestrator.git
cd OpenCodeOrchestrator
./install.sh --local

The Problem

A single general-purpose agent can do a little bit of everything, but it usually does none of it exceptionally well. Planning, implementation, debugging, operations, and review need different constraints and different strengths. One prompt and one permission set for all tasks creates inconsistent quality and unnecessary risk.

The Solution

OpenCode Orchestrator uses role-specialized agents with focused prompts, scoped permissions, and model tiering for cost/capability balance. The orchestrator delegates to the right specialist — sequentially or in parallel — chains workflows automatically, and uses persistent megamemory context across sessions. Free MCP servers provide web search, GitHub code search, and project memory without API keys. You get better outcomes, lower operational risk, and less prompt micromanagement.

Architecture

(Architecture diagram: assets/architecture.svg)

Key Benefits

  • Specialized agents: Each agent has a focused prompt tuned to one job, scoped permissions (e.g., build can edit while plan cannot), and a model selected for workload fit.
  • Intelligent routing: The orchestrator picks the right specialist automatically. Complex features go plan -> build; unclear failures go debug -> build.
  • Review loop: Non-trivial changes flow through review. If issues are found, they return to build until the quality gate passes.
  • Parallel delegation: The orchestrator dispatches independent workstreams simultaneously — e.g., two explore tasks or plan + explore in parallel — then synthesizes results.
  • Free MCP servers: Web search (exa), GitHub code search (grep_app), and persistent memory (megamemory) work out of the box with no API keys.
  • Persistent memory via megamemory: A project knowledge graph survives across sessions; the orchestrator queries before work and records after work.
  • One-line install: curl -fsSL https://raw.githubusercontent.com/0xK3vin/OpenCodeOrchestrator/main/install.sh | bash

Workflow Examples

(Workflow diagram: assets/workflow.svg)

Simple code change

You: "Add a loading spinner to the dashboard"
-> orchestrator -> build -> done

Complex feature (plan -> build -> review)

You: "Add real-time notifications with WebSocket support"
-> orchestrator -> plan (architecture spec)
                 -> build (implements following plan)
                 -> review (verifies correctness)
                 -> PASS ✓

Bug with unclear cause (debug -> build -> review)

You: "The checkout flow returns 500 errors intermittently"
-> orchestrator -> debug (traces execution, finds race condition)
                 -> build (implements fix)
                 -> review -> PASS ✓

Review loop (issues found)

-> review finds missing null check
-> build fixes it
-> review again -> PASS ✓

Codebase question

You: "How does the auth middleware work?"
-> orchestrator -> explore (read-only analysis with file:line refs)

Deployment

You: "Deploy to staging"
-> orchestrator -> devops (verifies build, deploys, reports rollback procedure)

Parallel research

You: "Compare our auth implementation against industry best practices"
-> orchestrator -> explore (reads local auth code)     } parallel
               -> explore (searches web via exa)       }
               -> synthesizes findings into recommendation

Megamemory Integration

megamemory is a persistent knowledge graph for your project: features, architecture, patterns, and decisions. It gives your agents memory across sessions.

Workflow: understand -> work -> update

  • Session start: orchestrator loads project context with memory overview.
  • Before tasks: orchestrator queries relevant architecture, patterns, and prior decisions.
  • After tasks: orchestrator records new features, decisions, and patterns.

Custom commands included in this repo:

  • /user:bootstrap-memory - index and bootstrap knowledge for a new project.
  • /user:save-memory - record what was learned or changed in the current session.
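
A typical session might look like the following. The output lines are illustrative only; actual responses depend on your project and model:

```
You: /user:bootstrap-memory
orchestrator: indexing project... recorded entities for features, patterns, and decisions

(later, after a refactor)
You: /user:save-memory
orchestrator: recorded decision and updated affected pattern entries
```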

Why it matters: you stop re-explaining your codebase every new session.

Agent Reference

| Agent        | Role                               | Model                              | Can Edit | Can Bash        | Can Delegate         |
|--------------|------------------------------------|------------------------------------|----------|-----------------|----------------------|
| orchestrator | Primary router and synthesis layer | anthropic/claude-opus-4-6          | No       | Yes (deny list) | Yes (to specialists) |
| plan         | Architecture/spec planning         | anthropic/claude-opus-4-6          | No       | No              | No                   |
| build        | Implementation and tests           | openai/gpt-5.3-codex               | Yes      | Yes             | No                   |
| debug        | Root-cause analysis                | anthropic/claude-opus-4-6          | No       | Yes (deny list) | No                   |
| devops       | Git, CI/CD, deployments            | anthropic/claude-sonnet-4-20250514 | Yes      | Yes (deny list) | No                   |
| explore      | Read-only codebase analysis        | anthropic/claude-sonnet-4-20250514 | No       | No              | No                   |
| review       | Validation and quality gate        | anthropic/claude-opus-4-6          | No       | Yes (deny list) | No                   |
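
Each agents/*.md file pairs frontmatter (model, permissions) with the agent's prompt body. A minimal sketch of what agents/build.md might contain; the exact field names depend on your OpenCode version, so treat this as illustrative rather than a schema reference:

```markdown
---
description: Implementation and tests
model: openai/gpt-5.3-codex
permission:
  edit: allow
  bash: allow
---

You are the build agent. Implement the requested change, following any
plan produced by the plan agent, and add or update tests as needed.
```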

Model Configuration

Use the interactive configurator:

curl -fsSL https://raw.githubusercontent.com/0xK3vin/OpenCodeOrchestrator/main/configure.sh | bash

Presets available:

  • Recommended: Opus reasoning, Sonnet execution, Codex coding (default profile).
  • All Claude: Opus reasoning and Sonnet for execution/coding.
  • All OpenAI: o3 reasoning, GPT-4.1 execution, Codex coding.
  • All Google: Gemini Pro reasoning/coding with Gemini Flash execution.
  • Budget: Sonnet everywhere.
  • Custom: choose models interactively.

Custom mode supports both:

  • Per-tier model selection (Reasoning, Execution, Coding).
  • Per-agent model selection across all 7 agents.

File Structure

OpenCodeOrchestrator/
├── README.md
├── install.sh
├── configure.sh
├── LICENSE
├── assets/
│   ├── banner.svg
│   ├── architecture.svg
│   └── workflow.svg
├── config/
│   ├── opencode.json
│   ├── AGENTS.md
│   └── package.json
├── agents/
│   ├── orchestrator.md
│   ├── build.md
│   ├── plan.md
│   ├── debug.md
│   ├── devops.md
│   ├── explore.md
│   └── review.md
├── commands/
│   ├── bootstrap-memory.md
│   └── save-memory.md
└── docs/
    ├── agents.md
    ├── configuration.md
    └── workflows.md

Installed layout in ~/.config/opencode/:

  • opencode.json, AGENTS.md, package.json
  • agents/*.md
  • commands/*.md
  • docs/*.md

Installation

Manual Install
  1. Clone this repo.
  2. Copy config/opencode.json to ~/.config/opencode/opencode.json.
  3. Copy config/AGENTS.md to ~/.config/opencode/AGENTS.md.
  4. Copy all agents/*.md to ~/.config/opencode/agents/.
  5. Copy all commands/*.md to ~/.config/opencode/commands/.
  6. Optionally copy docs/*.md to ~/.config/opencode/docs/.
  7. Copy config/package.json to ~/.config/opencode/package.json and run npm install.
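
The numbered steps above can be sketched as a script. This demo copies within a throwaway sandbox so it runs anywhere; for a real install, set SRC to your clone and DEST to ~/.config/opencode, and run npm install at the end:

```shell
#!/usr/bin/env sh
set -eu

# Sandbox stand-ins for the real paths.
# Real install: SRC=./OpenCodeOrchestrator, DEST="$HOME/.config/opencode"
SRC="$(mktemp -d)"
DEST="$(mktemp -d)"

# Fake a minimal clone so the copy steps below are runnable as-is.
mkdir -p "$SRC/config" "$SRC/agents" "$SRC/commands" "$SRC/docs"
echo '{}' > "$SRC/config/opencode.json"
echo '# global agent guidance' > "$SRC/config/AGENTS.md"
echo '{}' > "$SRC/config/package.json"
for a in orchestrator build plan debug devops explore review; do
  echo "# $a prompt" > "$SRC/agents/$a.md"
done
echo '# bootstrap-memory' > "$SRC/commands/bootstrap-memory.md"
echo '# save-memory' > "$SRC/commands/save-memory.md"
echo '# agents doc' > "$SRC/docs/agents.md"

# Steps 2-7: copy configs, agents, commands, and docs into place.
mkdir -p "$DEST/agents" "$DEST/commands" "$DEST/docs"
cp "$SRC/config/opencode.json" "$DEST/opencode.json"
cp "$SRC/config/AGENTS.md"     "$DEST/AGENTS.md"
cp "$SRC/config/package.json"  "$DEST/package.json"
cp "$SRC"/agents/*.md   "$DEST/agents/"
cp "$SRC"/commands/*.md "$DEST/commands/"
cp "$SRC"/docs/*.md     "$DEST/docs/"

ls "$DEST"
# Real install only: cd "$DEST" && npm install
```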

Post-install:

  • Edit ~/.config/opencode/opencode.json with your real API keys.
  • Optionally run model configuration: curl -fsSL https://raw.githubusercontent.com/0xK3vin/OpenCodeOrchestrator/main/configure.sh | bash
  • Configure/enable MCP servers you want to use.
  • Restart OpenCode.

Configuration

For full configuration details, see docs/configuration.md.

  • The template sets default_agent to orchestrator, so OpenCode launches into the orchestrator by default.
  • Model customization: tune per-agent model values in agents/*.md.
  • Permission tuning: tighten or relax read/edit/write/bash/task permissions in frontmatter and opencode.json.
  • Adding/removing agents: update agents/, opencode.json, and orchestrator routing docs.
  • MCP setup: configure mcp entries in opencode.json for local or remote servers.
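
As an illustrative sketch of the last point, a locally launched MCP server entry in opencode.json could look roughly like this. The field names and the npx invocation are assumptions about the current OpenCode config schema and package name; check docs/configuration.md and the OpenCode documentation before copying:

```json
{
  "mcp": {
    "megamemory": {
      "type": "local",
      "command": ["npx", "-y", "megamemory"],
      "enabled": true
    }
  }
}
```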

Design Decisions

  • Model tiering: Opus for deep reasoning/review, Sonnet for operational/read-only tasks, Codex for code implementation.
  • DRY tool docs: tool behavior lives in global skill/tool prompts, not duplicated inside every agent prompt.
  • Bash deny list over allowlist: broad utility with guardrails against accidental destructive commands.
  • Orchestrator cannot edit: enforces delegation discipline and clear responsibility boundaries.
  • Review loop quality gate: non-trivial changes are verified before completion.
  • Parallel dispatch: Independent workstreams are delegated simultaneously to reduce wall-clock time; the orchestrator synthesizes results after all branches complete.

MCP Servers

  • megamemory: persistent project knowledge graph.
  • exa: free web search, code docs lookup, and URL crawling via Exa AI. No API key required.
  • grep_app: free GitHub code search across millions of public repos via grep.app. No API key required.

License

MIT. See LICENSE.
