One prompt. Three roles.
A handoff you can actually watch.
Director, Architect, and Engineers, each running the model you assign to it. Real phases. Real worktrees. No silent state.
One agent
Designed to work
alongside you.
Each agent is autonomous, observable, and accountable. Spawn one, or orchestrate a fleet.
Autonomous
Plan, execute,
verify.
Director, Architect, and Engineers coordinate through an append-only protocol. No silent state.
Local-first
Your machine.
Your models.
Runs on your hardware, with the providers you choose. Nothing leaves unless you say so.
Run the AI you already pay for. Together.
10 providers. Cloud or local. One per role. Your keys, your accounts, your bill. We don't sell a single token.
anthropic / claude-opus
Owns the call. Writes the spec.
openai / gpt-5-codex
Briefs the build. Reviews the diff.
ollama / qwen-coder · local
Ships the code. Greens the tests.
Director
Holds the vision.
Owns the outcome.
The Director takes your brief, decomposes it into goals, and dispatches the work. They stay in the loop end-to-end, reviewing every milestone, deciding when to escalate, when to keep going, and when the job is done. No silent retries, no runaway loops. Every important call goes through them.
Architect
Maps the terrain.
Drafts the plan.
The Architect partners with the Director to turn intent into structure. They survey the codebase, identify constraints, and produce a precise engineering spec: interfaces, dependencies, acceptance criteria. Engineers never start work without an Architect-approved plan, so nothing gets built twice.
Engineers
Build in parallel.
Ship the work.
Engineers pick up scoped tasks from the Architect's spec and execute independently. Each one writes, tests, and verifies their slice in isolation, reporting completion back up the chain. Multiple engineers run side-by-side without stepping on each other; your forest scales out, not just up.
They branch. They have brains.
Sometimes one waits on another.
Director delegates. Architect plans. Engineers build in their own git worktrees, each with their own brain. When one needs another's output, it pauses until that worktree ships. Architect reviews. Director ships.
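The flow above maps onto plain git worktrees. A minimal, self-contained sketch; the branch and task names are invented for illustration, not ForestOps' real naming:

```python
# Self-contained sketch: two "engineers" working in parallel worktrees.
import os
import subprocess
import tempfile

def git(*args, cwd):
    # Thin wrapper; check=True stops the flow on any git error.
    subprocess.run(["git", *args], cwd=cwd, check=True,
                   capture_output=True, text=True)

root = tempfile.mkdtemp()
repo = os.path.join(root, "demo")
os.makedirs(repo)
git("init", "-q", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "dev", cwd=repo)
git("commit", "--allow-empty", "-m", "init", cwd=repo)

# Each engineer gets an isolated checkout on its own branch.
for task in ("auth", "billing"):
    git("worktree", "add", "-b", f"feature/{task}",
        os.path.join(root, f"wt-{task}"), cwd=repo)

# Engineer A ships; engineer B, who was waiting on A's output,
# merges it into its own worktree and keeps going.
git("commit", "--allow-empty", "-m", "auth done",
    cwd=os.path.join(root, "wt-auth"))
git("merge", "feature/auth", cwd=os.path.join(root, "wt-billing"))
```

Because each task lives in its own worktree, "pausing until a worktree ships" is just waiting on a branch, and "stepping on each other" is structurally impossible.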
Memory that lives in your project.
Reaches across them.
Every workspace builds its own memory: code patterns, decisions, lessons learned the hard way. Pro workspaces can recall from other workspaces when relevant.
Skill evolution
Plant a seed.
Watch it grow.
Every task teaches your agents something new. Skills compound across runs and sessions.
Memory that lasts
A garden of
lessons learned.
Successful patterns are remembered, refined, and shared between agents in your forest.
Bloom
Skills that
flower over time.
What started as a single task becomes a library of capabilities. Your agents get better the more they work.
Give each agent different tools.
Install skills and MCP servers per role. The Director gets planning tools. The Architect gets review tools. Engineers get file IO, terminal, and the database.
- planner · skill · spec-first decomposition
- reviewer · skill · static review + ADRs
- tdd-buddy · skill · red → green → refactor
- caveman · skill · terse, no fluff
- superpowers · skill · curated workflows
- filesystem · MCP · read/write your repo
- shell · MCP · run, test, deploy
- github · MCP · PRs, issues, reviews
- postgres · MCP · schema + queries
- stripe · MCP · billing read-only
- figma · MCP · design handoff
- slack · MCP · send + read messages
- linear · MCP · tickets + cycles
- notion · MCP · docs + databases
- sentry · MCP · errors + traces
- vercel · MCP · deploys + logs

- planner · skill
- reviewer · skill
- linear · MCP

- reviewer · skill
- postgres · MCP
- sentry · MCP

- tdd-buddy · skill
- shell · MCP
- github · MCP
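Per-role assignment could be expressed as a simple mapping. The role groupings and config shape below are assumptions for illustration, not ForestOps' actual format:

```python
# Hypothetical per-role tool map; groupings are illustrative only.
ROLE_TOOLS = {
    "director":  {"skills": ["planner", "reviewer"], "mcps": ["linear"]},
    "architect": {"skills": ["reviewer"], "mcps": ["postgres", "sentry"]},
    "engineer":  {"skills": ["tdd-buddy"], "mcps": ["shell", "github"]},
}

def tools_for(role: str) -> list[str]:
    # Flatten a role's skills and MCP servers into one tool list.
    cfg = ROLE_TOOLS[role]
    return cfg["skills"] + cfg["mcps"]
```

With a map like this, spawning an Engineer attaches only its own tools; the Director never sees the terminal.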
Collective intelligence
A network
that learns.
Every agent contributes to a shared brain. Patterns, decisions, and outcomes all feed back.
New connections
Neurons fire,
neurons wire.
Each new task forms a fresh connection. The more you run, the smarter the network becomes.
Always growing
Built to
evolve forever.
No retraining required. Your forest gets sharper every day, with no manual upkeep.
No new mental model.
Your repo is the workspace.
Point ForestOps at a directory. That's the workspace. Memory lives there. Skills attach there. Worktrees branch from there. When you leave the repo, the team stays with it.
- 01 · One directory = one workspace
- 02 · Memory + skills live next to the code
- 03 · Multi-repo? Open as many as you need.
Common questions.
Can I use my own providers and keys?
Yes. Anthropic, OpenAI (Codex), DeepSeek, MiniMax, Qwen, Kimi, Z.AI, OpenRouter, Requesty, or run local with LM Studio and Ollama. We never proxy your model traffic.
What counts as a job?
A single Director-initiated run. Free is capped at 5/week. Plus and above are unlimited.
Where does my code go?
Nowhere. Engineers operate on your local filesystem. The cloud sees ciphertext memory blobs only.
How is memory encrypted?
Embeddings happen on your device (BGE-small-en-v1.5, 384-dim). Each chunk is AES-256-GCM encrypted before it leaves. We store the ciphertext at rest in our DB.
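That pipeline can be sketched end-to-end, assuming the third-party `cryptography` package; the embedding vector is faked here, standing in for on-device BGE-small-en-v1.5 output:

```python
# Embed locally, encrypt locally, upload ciphertext only.
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # AES-256 key; never leaves the device
vector = [0.0] * 384                       # placeholder for a 384-dim embedding
plaintext = struct.pack("384f", *vector)   # 1536 bytes of float32

aes = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce, unique per chunk
ciphertext = aes.encrypt(nonce, plaintext, None)

# Only this opaque blob is stored server-side.
blob = nonce + ciphertext

# Round trip: the key holder (you) can always recover the vector.
decrypted = aes.decrypt(blob[:12], blob[12:], None)
assert struct.unpack("384f", decrypted) == tuple(vector)
```

The server stores `blob` but never holds `key`, so "ciphertext memory blobs only" is enforced by construction, not policy.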
Can I run fully local?
Yes. LM Studio or Ollama as your provider. Memory stays on-disk. Orchestration runs locally; the API is just for billing + WebSocket relay.
What if I already use Claude Code?
Use it. ForestOps embeds it as a runner. You get the same Claude Code you know, plus a Director, Architect, memory, and a team.
What do paid plans add?
Memory across runs. MCPs. Unlimited skills per agent. Unlimited jobs. Free is for trying it out.
Is ForestOps open source?
Source is closed. What you can verify on your own network: your code never leaves your machine, model traffic goes direct to your provider, and any memory we receive is already encrypted.
