Claude Code Now Supports Multi-Agent Teams — Orchestrate Parallel AI Workers
Anthropic has launched **Multi-Agent Teams** for Claude Code, enabling developers to spin up multiple AI agents that work simultaneously on complex tasks.
The architecture centers on a **Team Lead** session that creates the team, assigns tasks, and synthesizes results. Each **Teammate** operates in its own independent context window with a shared task list and direct inter-agent messaging — unlike subagents, which can only report back to the caller.
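Teams are driven by ordinary prompts to the lead session rather than a dedicated syntax; as a purely illustrative sketch (the task and wording are hypothetical), a request might look like this:

```
Create an agent team to migrate our logging module to structured logging.
Spawn three teammates: one to update the core library, one to migrate the
call sites, and one to extend the test suite. Coordinate through the shared
task list and message me when every task is complete.
```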
Key capabilities include:
- **Parallel code review**: spawn reviewers focused on security, performance, and test coverage simultaneously (see the example prompt after this list)
- **Competing hypothesis debugging**: multiple agents investigate different theories and actively challenge each other's findings
- **Cross-layer development**: separate agents own frontend, backend, and tests without file conflicts
- **Plan approval gates**: the lead can require teammates to submit plans before implementing
- **Quality hooks**: automated checks when tasks are created, completed, or teammates go idle
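For instance, the parallel-review pattern above, combined with a plan approval gate, might be kicked off with a prompt along these lines (the wording and the PR number are illustrative, not a fixed interface):

```
Review PR #123 with a team of three reviewers: one focused on security,
one on performance, and one on test coverage. Require each reviewer to
submit a plan before starting, then synthesize their findings into a
single report.
```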
Teammates can be assigned different models (e.g., Haiku for lightweight tasks) and can use subagent definitions for reusable role configurations. The system handles task dependencies automatically — blocked tasks unblock when their dependencies complete.
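As a sketch of what a reusable role could look like, here is a hypothetical subagent definition in the `.claude/agents/` Markdown format Claude Code uses for subagents, with a lighter model assigned via the frontmatter (the file name, description, and prompt body are illustrative):

```markdown
---
name: test-coverage-reviewer
description: Reviews diffs for missing or weak test coverage
model: haiku
---

You are a test-coverage reviewer. For each change you are assigned,
identify untested branches and edge cases, and report one finding per
gap you discover.
```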
Anthropic recommends starting with 3-5 teammates and 5-6 tasks per agent. The feature is experimental, requiring opt-in via the `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` setting, and currently supports in-process and tmux/iTerm2 split-pane display modes.
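A minimal opt-in sketch, assuming the setting is supplied as an environment variable through the `env` block that Claude Code's `settings.json` already supports (the value `1` is an assumption):

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```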
📄 Source: Anthropic