
Claude Code Agent Patterns: 6 Orchestration Strategies That Work

Six proven agent orchestration patterns for Claude Code: orchestrator, fan-out, validation chain, specialist routing, progressive refinement, and watchdog.

Stop configuring. Start shipping. Everything you're reading about and more: the Agentic Orchestration Kit for Claude Code.

Problem: You know Claude Code can spawn sub-agents, but you're stuck using the same pattern every time. You dispatch a few parallel agents, hope for the best, and manually stitch results together. There are better ways to organize agent work, and the pattern you pick determines whether your output is reliable or a mess.

This guide covers six agent orchestration patterns used in real Claude Code projects. Each pattern fits different situations. Pick the wrong one and you waste tokens, create merge conflicts, or get inconsistent output. Pick the right one and agents handle complex work with minimal oversight.

The Orchestrator Pattern

A central AI thread coordinates specialist agents. It doesn't write code itself. It plans, delegates, reviews, and routes.

When to use: Multi-domain features that need coordination across frontend, backend, and database. Any task where someone needs to see the big picture.

How it works in Claude Code: Your main chat session becomes the orchestrator. It reads the requirements, creates a plan, then uses the Task tool to dispatch specialists. When results come back, it reviews them and decides what happens next.

You are the orchestrator. For this feature request, create a plan that:
1. Breaks the work into domain-specific tasks
2. Identifies dependencies between tasks
3. Dispatches each task to a sub-agent with explicit file scope
4. Reviews outputs before marking complete

Do NOT write implementation code yourself. Coordinate only.

The orchestrator pattern maps directly to how CLAUDE.md configurations work in practice. Your CLAUDE.md defines routing rules, and the central thread follows them. The ClaudeFast Code Kit formalizes this as /team-plan followed by /build, where the orchestrator generates a full task dependency graph before any agent writes a line of code.
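
The coordination loop can be sketched in plain Python. This is a minimal model of the pattern, not Claude Code's actual API: `run_agent` is a hypothetical stand-in for dispatching a sub-agent via the Task tool, and the review step is simplified to a single check.

```python
# Hypothetical stand-in for Claude Code's Task tool dispatch.
def run_agent(role: str, task: str) -> str:
    # A real implementation would spawn a sub-agent and await its output.
    return f"[{role}] completed: {task}"

def orchestrate(feature: str, plan: list[tuple[str, str]]) -> list[str]:
    """Plan is a list of (specialist_role, task) pairs. The orchestrator
    dispatches each task and reviews the result; it never produces
    implementation output itself."""
    results = []
    for role, task in plan:
        output = run_agent(role, task)
        # Review step: reject output that can't be attributed to its agent.
        if not output.startswith(f"[{role}]"):
            raise ValueError(f"unreviewable output from {role}")
        results.append(output)
    return results

plan = [
    ("backend-engineer", "add /api/tasks endpoint"),
    ("frontend-specialist", "build TaskList component"),
]
print(orchestrate("task management", plan))
```

The point of the sketch is the shape: the orchestrator holds the plan and the review gate, and all implementation work happens inside `run_agent`.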

When NOT to use: Simple tasks that a single agent handles in one pass. If you're adding a utility function or fixing a typo, orchestration overhead costs more than it saves.

The Fan-Out / Fan-In Pattern

Dispatch multiple agents in parallel, then merge their results into a single output.

When to use: Research tasks, multi-file analysis, any work where agents operate on independent inputs. Code reviews across separate modules. Gathering information from different parts of a codebase before making a decision.

How it works in Claude Code: Spawn parallel sub-agents using multiple Task tool calls in a single message. Each agent reads different files or analyzes a different concern. When all agents return, the central thread synthesizes.

Complete these 4 tasks using parallel sub-agents:

1. Read src/api/ and list all endpoints missing input validation
2. Read src/auth/ and identify any hardcoded secrets or weak patterns
3. Read src/db/ and check for missing indexes on frequently queried columns
4. Read src/utils/ and flag any functions with no error handling

After all agents report back, synthesize findings into a prioritized action list.

The fan-out phase runs fast because agents don't share state. The fan-in phase is where the value gets created: the orchestrator spots connections between findings that no individual agent saw. An auth weakness combined with a missing input-validation check might add up to a critical vulnerability that neither finding alone suggests.
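
The structure of fan-out/fan-in maps cleanly onto ordinary parallel code. In this sketch, `analyze` is a hypothetical stand-in for a sub-agent inspecting one directory; the directory names and findings are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a sub-agent that inspects one directory.
def analyze(scope: str) -> dict:
    findings = {
        "src/api/": ["endpoint /pay missing input validation"],
        "src/auth/": ["weak token comparison"],
    }
    return {"scope": scope, "issues": findings.get(scope, [])}

def fan_out_fan_in(scopes: list[str]) -> list[str]:
    # Fan-out: scopes are independent, so the agents can run in parallel.
    with ThreadPoolExecutor() as pool:
        reports = list(pool.map(analyze, scopes))
    # Fan-in: the orchestrator merges and prioritizes across all reports.
    merged = [f"{r['scope']}: {issue}" for r in reports for issue in r["issues"]]
    return sorted(merged)

print(fan_out_fan_in(["src/api/", "src/auth/", "src/db/"]))
```

Note that the parallelism lives entirely in the fan-out phase; the fan-in phase is a single sequential merge, which is what lets it see across agents.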

When NOT to use: When agents need to modify the same files. Parallel writes to overlapping files cause merge conflicts. If work isn't truly independent, use the sequential chain patterns instead.

The Validation Chain Pattern

A builder agent creates code. A separate validator agent checks it. They never overlap roles.

When to use: Production code changes, security-sensitive work, anything where incorrect output has high cost. This is the pattern to reach for when you can't afford to ship a bug.

How it works in Claude Code: Create two tasks with a dependency. The builder writes code. The validator runs after the builder finishes, reads the output, runs tests, and reports issues without modifying files.

TaskCreate(
  subject="Build payment webhook handler",
  description="Create Stripe webhook handler in src/api/webhooks/stripe.ts.
  Handle checkout.session.completed, payment_intent.failed events.
  Verify webhook signatures. Include error handling."
)

TaskCreate(
  subject="Validate payment webhook handler",
  description="Read src/api/webhooks/stripe.ts. Verify:
  - Webhook signature verification exists
  - Both event types handled with proper responses
  - Error handling covers malformed payloads
  - No hardcoded secrets
  Report issues only. Do NOT modify any files."
)

TaskUpdate(taskId="2", addBlockedBy=["1"])

The addBlockedBy parameter ensures the validator waits. This is the core of builder-validator orchestration, and it works because the validator starts with fresh eyes. It doesn't share the builder's assumptions or blind spots.

If the validator finds problems, it creates a fix task that routes back to a builder. A new validator chains behind that fix. The cycle narrows until the output is correct. See the full team orchestration guide for the complete feedback loop.
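
That narrowing cycle can be modeled as a small loop. This is a sketch under assumptions: `build` and `validate` are hypothetical callables standing in for the builder and validator agents, and the toy builder only fixes what the validator flagged.

```python
def validation_chain(build, validate, max_rounds: int = 3):
    """Run the builder, then the validator; route failures back to the
    builder until the validator passes or rounds run out."""
    feedback = None
    for round_no in range(1, max_rounds + 1):
        artifact = build(feedback)
        issues = validate(artifact)  # validator reads, never modifies
        if not issues:
            return artifact, round_no
        feedback = issues
    raise RuntimeError(f"still failing after {max_rounds} rounds: {issues}")

# Toy builder that only fixes what the validator flagged.
state = {"has_signature_check": False}
def build(feedback):
    if feedback:
        state["has_signature_check"] = True
    return dict(state)

def validate(artifact):
    return [] if artifact["has_signature_check"] else ["missing signature check"]

artifact, rounds = validation_chain(build, validate)
print(artifact, rounds)  # converges on round 2
```

The `max_rounds` cap matters in practice: without it, a builder and validator that disagree about requirements can loop indefinitely.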

When NOT to use: Rapid prototyping where speed matters more than correctness. The validation step doubles compute cost per task. For throwaway code or exploration, a single agent is faster and cheaper.

The Specialist Routing Pattern

Match tasks to domain-expert agents based on task type. A frontend task goes to a frontend specialist. A database migration goes to a database expert. Each specialist carries domain-specific instructions, conventions, and tool restrictions.

When to use: Large projects with multiple domains. Teams that have established conventions per domain. Any workflow where generic agent instructions produce inconsistent output across different parts of the codebase.

How it works in Claude Code: Define specialist agents in .claude/agents/ or encode routing rules in your CLAUDE.md. The orchestrator reads the task, identifies the domain, and dispatches to the appropriate specialist.

<!-- In CLAUDE.md -->

## Agent Routing Table

| Task Domain | Route To            | File Scope           |
| ----------- | ------------------- | -------------------- |
| React/UI    | frontend-specialist | src/components/      |
| API routes  | backend-engineer    | src/api/, src/lib/   |
| Database    | database-specialist | src/db/, migrations/ |
| Security    | security-auditor    | Any (read-only)      |
| Tests       | quality-engineer    | tests/, __tests__/   |

Each specialist defined in .claude/agents/ inherits your project's CLAUDE.md context automatically, so they pick up coding standards without extra configuration. The specialist's own definition file adds domain-specific instructions: which frameworks to use, naming conventions, error handling patterns specific to that layer.
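
Routing itself is just a lookup. The sketch below mirrors the table above as a plain dict; the domain keys and agent names are illustrative, not a Claude Code API.

```python
# Routing table mirroring the CLAUDE.md table above (names illustrative).
ROUTES = {
    "react": ("frontend-specialist", ["src/components/"]),
    "api": ("backend-engineer", ["src/api/", "src/lib/"]),
    "database": ("database-specialist", ["src/db/", "migrations/"]),
    "security": ("security-auditor", ["*"]),  # read-only in practice
    "tests": ("quality-engineer", ["tests/"]),
}

def route(task_domain: str) -> tuple[str, list[str]]:
    """Return (agent_name, file_scope) for a task domain."""
    try:
        return ROUTES[task_domain]
    except KeyError:
        raise ValueError(f"no specialist registered for domain {task_domain!r}")

agent, scope = route("database")
print(agent, scope)
```

Failing loudly on an unknown domain is deliberate: silently falling back to a generic agent is how routing tables rot.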

This pattern scales well. Adding a new domain means adding a new agent definition and a row to the routing table. It doesn't require rewriting existing agents. The custom agents guide covers how to define specialists with YAML frontmatter.

When NOT to use: Small projects where a single agent knows enough about everything. Routing overhead is wasted when there's only one domain to route to.

The Progressive Refinement Pattern

Start with a rough draft, then improve it through multiple passes. Each pass focuses on a different quality dimension.

When to use: Content generation, complex code architecture, any task where getting it right the first time is unlikely. Works well for writing blog posts, designing API schemas, or creating configuration files that need to satisfy multiple constraints.

How it works in Claude Code: Chain agents sequentially, where each agent takes the previous output and refines one aspect.

Phase 1 - Draft: "Generate the initial API schema for a task management
system. Include all entities, relationships, and basic validation rules."

Phase 2 - Security review: "Review this schema. Add authentication
requirements, permission checks, and input sanitization rules.
Don't change the core structure."

Phase 3 - Performance review: "Review the schema for performance.
Add indexes, identify N+1 query risks, suggest denormalization
where read performance matters."

Phase 4 - Final validation: "Verify the schema is consistent.
Check that all referenced entities exist, foreign keys are valid,
and naming conventions are uniform."

Each phase has a narrow focus. The draft agent prioritizes completeness. The security agent adds constraints. The performance agent optimizes. The validator checks consistency. No single agent tries to handle all concerns at once.

Progressive refinement mirrors how experienced developers actually work. You don't write production-ready code in one pass. You draft, review for correctness, review for performance, then polish. This pattern just distributes those passes across specialized agents.
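
Structurally, the pattern is a pipeline: each phase takes the previous output and improves one dimension. In this sketch the phase functions are simplified stand-ins for the four prompts above, operating on a toy schema dict.

```python
# Each phase stands in for one of the four prompts above.
def draft(schema: dict) -> dict:
    return {**schema, "entities": ["Task", "User"]}

def add_security(schema: dict) -> dict:
    return {**schema, "auth": "required"}

def add_performance(schema: dict) -> dict:
    return {**schema, "indexes": ["Task.user_id"]}

def final_validation(schema: dict) -> dict:
    assert schema["entities"] and schema["auth"] and schema["indexes"]
    return schema

def refine(schema: dict, phases) -> dict:
    for phase in phases:
        schema = phase(schema)  # each pass narrows one concern
    return schema

result = refine({}, [draft, add_security, add_performance, final_validation])
print(result)
```

Because phases compose left to right, reordering them changes the output: security constraints added before the draft exists have nothing to constrain.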

When NOT to use: Tasks that must be done in one shot because the output can't be incrementally improved. Also wasteful for simple tasks where the first pass is good enough.

The Watchdog Pattern

Background agents that monitor for specific conditions and alert or act when triggered. They run continuously alongside your main work, checking for issues without blocking your flow.

When to use: Long-running sessions where drift is possible. Monitoring context health, checking for regressions during refactoring, or watching for build failures while you work on something else.

How it works in Claude Code: Background a monitoring agent with Ctrl+B and continue working. The watchdog periodically checks its assigned condition. When it finds something, results surface automatically in your task list.

Background task: Monitor the test suite while I refactor the auth module.
Every time I complete a change, run the test suite for src/auth/.
If any test fails, immediately create a task with:
- Which test failed
- The assertion error
- Which file I likely broke based on the test name

The context recovery hook is a watchdog pattern implemented at the infrastructure level. It monitors context utilization and triggers recovery actions when the context window fills up. Status monitoring hooks that track agent health and progress are another common watchdog implementation.

Watchdogs work best when combined with the async workflows approach, where backgrounded agents run independently and surface results when you're ready to look at them.
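
The watchdog loop itself is simple: poll a condition, emit an alert when it trips. In this sketch, `check` is a hypothetical stand-in for running the test suite and `alert` for creating a task; neither is a real Claude Code API.

```python
import itertools

def watchdog(check, alert, max_polls: int = 10):
    """Poll `check` until it reports a failure, then alert and stop.
    Returns the failure, or None if nothing tripped within max_polls."""
    for poll in itertools.count(1):
        if poll > max_polls:
            return None
        failure = check()
        if failure:
            alert(failure)
            return failure

# Toy check that reports a failure on the third poll.
polls = iter([None, None, "test_login_rejects_expired_token failed"])
alerts = []
result = watchdog(lambda: next(polls), alerts.append)
print(result, alerts)
```

A real watchdog would sleep between polls and run indefinitely; the cap here just keeps the sketch terminating.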

When NOT to use: Short sessions where monitoring overhead exceeds the value of what you're monitoring. If your task finishes in two minutes, a watchdog that checks every thirty seconds adds nothing.

Combining Patterns

Real projects don't use a single pattern in isolation. A typical complex feature might look like this:

  1. Orchestrator reads the requirements and creates a plan
  2. Specialist routing dispatches tasks to domain experts
  3. Fan-out runs independent domain tasks in parallel
  4. Validation chains verify each specialist's output
  5. Progressive refinement polishes the integrated result
  6. Watchdog monitors the test suite throughout

The key skill isn't memorizing patterns. It's recognizing which pattern fits the current task. Start with the simplest pattern that could work. Add complexity only when simpler patterns fail.

For a complete orchestration system with all six patterns pre-configured, including 18 specialist agents, routing rules, and dependency management, see the ClaudeFast Code Kit. Or start by adding one pattern at a time to your CLAUDE.md and see how it changes your workflow.
