Introduction to Multi-Agent Systems
Multi-agent systems (MAS) consist of multiple AI agents that work together to solve complex problems. Each agent specializes in a particular domain or task; the agents communicate, delegate, and collaborate to achieve outcomes that would be difficult or impossible for a single agent. Think of it as an AI team where each member has a distinct role.
Multi-agent architectures shine in scenarios that require diverse expertise, parallel processing, or quality assurance through peer review. A research agent gathers data, an analyst agent interprets it, and a writer agent creates the final report — each using the tools and prompts optimized for their role.
Multi-Agent Collaboration Patterns
- Sequential (Pipeline): Agents work in order — output of one becomes input of the next
- Hierarchical: A supervisor agent delegates tasks to worker agents and synthesizes results
- Collaborative: Agents discuss and debate to reach consensus
- Competitive: Multiple agents propose solutions, and the best one is selected
- Swarm: Agents dynamically hand off to the most appropriate agent based on context
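The sequential pattern is the simplest to reason about. As a minimal, framework-free sketch, each "agent" below is a stub function standing in for an LLM call with its own prompt and tools; the agent names are illustrative:

```python
# Minimal sketch of the sequential (pipeline) pattern.
# Each "agent" is a plain function here; in a real system each would call
# an LLM with its own system prompt and tools.

def research_agent(topic: str) -> str:
    # Would normally search the web and return raw findings
    return f"findings about {topic}"

def analyst_agent(findings: str) -> str:
    # Would normally extract trends and insights from the findings
    return f"insights from {findings}"

def writer_agent(insights: str) -> str:
    # Would normally draft the final report from the insights
    return f"report based on {insights}"

def run_pipeline(topic: str) -> str:
    # The output of one agent becomes the input of the next
    result = topic
    for agent in (research_agent, analyst_agent, writer_agent):
        result = agent(result)
    return result

print(run_pipeline("AI agents"))
```

The other patterns differ mainly in who decides the next step: a supervisor (hierarchical), the agents themselves (swarm), or a selection step over competing outputs (competitive).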
CrewAI: Role-Based Multi-Agent Framework
CrewAI is one of the most popular multi-agent frameworks. It uses a crew metaphor where you define agents with specific roles, goals, and tools, then organize them into a crew that executes tasks collaboratively.
# CrewAI multi-agent system
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool, WebsiteSearchTool
# Define specialized agents
researcher = Agent(
    role="Senior Research Analyst",
    goal="Find comprehensive, accurate information about AI industry trends",
    backstory="""You are a seasoned research analyst with 15 years of experience
    in technology market research. You excel at finding primary sources and
    verifying information across multiple outlets.""",
    tools=[SerperDevTool(), WebsiteSearchTool()],
    llm="anthropic/claude-sonnet-4-20250514",
    verbose=True,
)

analyst = Agent(
    role="Data Analyst",
    goal="Analyze research findings and extract actionable insights",
    backstory="""You are a data analyst who specializes in turning raw research
    into structured insights. You look for patterns, trends, and anomalies
    that others might miss.""",
    llm="anthropic/claude-sonnet-4-20250514",
    verbose=True,
)

writer = Agent(
    role="Technical Writer",
    goal="Create clear, engaging reports from analytical insights",
    backstory="""You are an award-winning technical writer known for making
    complex topics accessible. Your reports are used by C-suite executives
    for strategic decision-making.""",
    llm="anthropic/claude-sonnet-4-20250514",
    verbose=True,
)

# Define tasks
research_task = Task(
    description="""Research the current state of AI agents in enterprise software.
    Focus on: adoption rates, key vendors, ROI data, and implementation challenges.
    Provide at least 5 data points with sources.""",
    agent=researcher,
    expected_output="Detailed research findings with data points and sources",
)

analysis_task = Task(
    description="""Analyze the research findings. Identify:
    1. Top 3 trends
    2. Market size and growth projections
    3. Key challenges and opportunities
    4. Recommendations for enterprises considering AI agents""",
    agent=analyst,
    expected_output="Structured analysis with trends, projections, and recommendations",
    context=[research_task],  # This task depends on research
)

report_task = Task(
    description="""Write an executive summary report (500 words) covering:
    - Current state of AI agents in enterprise
    - Key trends and market data
    - Strategic recommendations
    Make it suitable for C-suite executives.""",
    agent=writer,
    expected_output="Professional executive summary report",
    context=[analysis_task],  # This task depends on analysis
)

# Create and run the crew
crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    process=Process.sequential,  # Tasks run in order
    verbose=True,
)
result = crew.kickoff()
print(result)
LangGraph Multi-Agent Architecture
LangGraph provides a graph-based approach to multi-agent systems. You define agents as nodes in a graph and control flow through edges and conditional routing. This gives you fine-grained control over agent communication patterns.
// LangGraph multi-agent system in TypeScript
import { StateGraph, MessagesAnnotation, START, END } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";
import { HumanMessage, SystemMessage, BaseMessage } from "@langchain/core/messages";
const model = new ChatAnthropic({ model: "claude-sonnet-4-20250514" });
// Define the state annotation
const AgentState = MessagesAnnotation;
// Supervisor agent that routes to specialists
async function supervisor(state: typeof AgentState.State) {
  const response = await model.invoke([
    new SystemMessage(
      `You are a supervisor managing a team of agents:
      - researcher: finds information
      - coder: writes and reviews code
      - reviewer: reviews quality of work
      Based on the conversation, decide which agent should act next.
      Respond with ONLY the agent name or "FINISH" if the task is complete.`
    ),
    ...state.messages,
  ]);
  // MessagesAnnotation only defines a "messages" channel, so the routing
  // decision is read back out of the supervisor's last message by the
  // conditional edge below rather than stored in a separate state field.
  return { messages: [response] };
}
// Specialized agent nodes
async function researcher(state: typeof AgentState.State) {
  const response = await model.invoke([
    new SystemMessage("You are a research agent. Find accurate information and cite sources."),
    ...state.messages,
  ]);
  return { messages: [response] };
}

async function coder(state: typeof AgentState.State) {
  const response = await model.invoke([
    new SystemMessage("You are a coding agent. Write clean, well-documented code."),
    ...state.messages,
  ]);
  return { messages: [response] };
}

async function reviewer(state: typeof AgentState.State) {
  const response = await model.invoke([
    new SystemMessage("You are a code reviewer. Check for bugs, security issues, and best practices."),
    ...state.messages,
  ]);
  return { messages: [response] };
}
// Build the graph
const graph = new StateGraph(AgentState)
  .addNode("supervisor", supervisor)
  .addNode("researcher", researcher)
  .addNode("coder", coder)
  .addNode("reviewer", reviewer)
  .addEdge(START, "supervisor")
  .addConditionalEdges("supervisor", (state) => {
    const lastMessage = state.messages[state.messages.length - 1];
    const next = (lastMessage.content as string).trim().toLowerCase();
    if (next === "finish") return END;
    return next;
  })
  .addEdge("researcher", "supervisor")
  .addEdge("coder", "supervisor")
  .addEdge("reviewer", "supervisor");
const app = graph.compile();
// Run the multi-agent system
const result = await app.invoke({
  messages: [
    new HumanMessage("Build a REST API endpoint for user authentication with JWT tokens"),
  ],
});
Microsoft AutoGen
AutoGen takes a conversation-centric approach where agents communicate through messages. It supports both fully autonomous agent conversations and human-in-the-loop patterns.
# AutoGen multi-agent conversation
import os

from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager
# Configuration for the LLM
llm_config = {
    "model": "claude-sonnet-4-20250514",
    "api_type": "anthropic",
    "api_key": os.environ["ANTHROPIC_API_KEY"],
}
# Create specialized agents
planner = AssistantAgent(
    name="Planner",
    system_message="""You are a project planner. Your job is to:
    1. Break down user requirements into technical tasks
    2. Define the order of execution
    3. Assign tasks to the appropriate team member
    Always create a clear, numbered plan before any work begins.""",
    llm_config=llm_config,
)

engineer = AssistantAgent(
    name="Engineer",
    system_message="""You are a software engineer. You write clean, efficient code.
    Always include error handling, type hints, and docstrings.
    Follow the plan provided by the Planner.""",
    llm_config=llm_config,
)

tester = AssistantAgent(
    name="Tester",
    system_message="""You are a QA engineer. Review code for:
    1. Correctness and edge cases
    2. Security vulnerabilities
    3. Performance issues
    4. Missing test cases
    Write unit tests for all code produced by the Engineer.""",
    llm_config=llm_config,
)

# User proxy to execute code and provide human input
user_proxy = UserProxyAgent(
    name="User",
    human_input_mode="TERMINATE",  # Ask for input only at the end
    code_execution_config={"work_dir": "workspace"},
)

# Create a group chat
group_chat = GroupChat(
    agents=[user_proxy, planner, engineer, tester],
    messages=[],
    max_round=15,
    speaker_selection_method="round_robin",
)

manager = GroupChatManager(
    groupchat=group_chat,
    llm_config=llm_config,
)

# Start the conversation
user_proxy.initiate_chat(
    manager,
    message="Build a Python function that validates email addresses with comprehensive tests",
)
OpenAI Swarm Pattern
The Swarm pattern enables lightweight agent handoffs. Instead of a supervisor, agents dynamically transfer control to the most appropriate agent based on the current context. This is ideal for customer service and support workflows.
// Swarm-style agent handoff pattern
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
interface SwarmAgent {
  name: string;
  instructions: string;
  tools: Record<string, Function>;
  handoff?: (context: string) => string | null; // Returns agent name or null
}
const agents: Record<string, SwarmAgent> = {
  triage: {
    name: "Triage Agent",
    instructions: "Route customer queries to the appropriate specialist.",
    tools: {},
    handoff: (context) => {
      if (context.includes("billing") || context.includes("payment")) return "billing";
      if (context.includes("technical") || context.includes("bug")) return "technical";
      if (context.includes("account") || context.includes("password")) return "account";
      return null; // Stay with triage
    },
  },
  billing: {
    name: "Billing Specialist",
    instructions: "Handle billing inquiries, refunds, and payment issues.",
    tools: {
      lookupInvoice: async (id: string) => `Invoice ${id}: $99.00, paid`,
      processRefund: async (id: string) => `Refund initiated for ${id}`,
    },
  },
  technical: {
    name: "Technical Support",
    instructions: "Diagnose and resolve technical issues.",
    tools: {
      checkSystemStatus: async () => "All systems operational",
      createTicket: async (desc: string) => `Ticket created: ${desc}`,
    },
  },
};
async function swarmLoop(userMessage: string): Promise<string> {
  let currentAgent = agents.triage;

  // Check for handoff
  if (currentAgent.handoff) {
    const nextAgent = currentAgent.handoff(userMessage);
    if (nextAgent && agents[nextAgent]) {
      currentAgent = agents[nextAgent];
    }
  }

  // Process with the selected agent
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    system: `You are ${currentAgent.name}. ${currentAgent.instructions}`,
    messages: [{ role: "user", content: userMessage }],
  });

  return response.content[0].type === "text" ? response.content[0].text : "";
}
Framework Comparison
| Feature | CrewAI | LangGraph | AutoGen |
|---|---|---|---|
| Approach | Role-based crews | Graph workflows | Conversation |
| Learning Curve | Low | Medium | Medium |
| Flexibility | Medium | Very High | High |
| Persistence | Built-in | Built-in | Manual |
| Best For | Task pipelines | Complex flows | Discussions |
Multi-Agent Best Practices
- Start with 2-3 agents: More agents means more complexity and cost. Add agents only when needed.
- Clear role boundaries: Each agent should have a distinct, non-overlapping responsibility.
- Structured communication: Define clear message formats between agents to avoid confusion.
- Human-in-the-loop: Add human checkpoints for critical decisions, especially in production.
- Monitor token usage: Multi-agent systems can consume tokens rapidly. Set budgets per agent and per task.
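To make the last point concrete, per-agent budgets can be enforced with a small, framework-agnostic tracker. The sketch below is hypothetical (the class and agent names are illustrative); in practice the token counts would come from the usage metadata each LLM API response reports:

```python
# Hypothetical per-agent token budget tracker (illustrative, not tied to
# any framework). Each agent call reports its token usage; the tracker
# raises as soon as an agent exceeds its budget so runaway loops fail fast.

class TokenBudget:
    def __init__(self, limits: dict[str, int]):
        self.limits = limits                      # max tokens per agent
        self.used = {name: 0 for name in limits}  # tokens spent so far

    def record(self, agent: str, tokens: int) -> None:
        self.used[agent] += tokens
        if self.used[agent] > self.limits[agent]:
            raise RuntimeError(
                f"{agent} exceeded its budget of {self.limits[agent]} tokens"
            )

    def remaining(self, agent: str) -> int:
        return self.limits[agent] - self.used[agent]

budget = TokenBudget({"researcher": 50_000, "writer": 20_000})
budget.record("researcher", 12_000)  # e.g. usage reported by an API response
print(budget.remaining("researcher"))
```

Wiring the `record` call into each agent invocation gives you a hard ceiling per agent and per task, rather than discovering cost overruns after the fact.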
Summary
Multi-agent systems bring the power of specialization and collaboration to AI applications. Whether you use CrewAI's role-based approach, LangGraph's graph-based workflows, or AutoGen's conversation patterns, the key is defining clear agent roles, establishing communication protocols, and maintaining oversight. Start simple, measure performance, and add agents only when they provide clear value.