When a single agent isn’t enough, AFK lets you orchestrate teams of specialist agents using delegation DAGs — directed acyclic graphs where a coordinator fans out work, collects results, and combines them.

Quick example

import asyncio
from afk.agents import Agent
from afk.core import Runner

# Specialist agents
researcher = Agent(name="researcher", model="gpt-5.2-mini", instructions="Find facts.")
writer = Agent(name="writer", model="gpt-5.2-mini", instructions="Write summaries.")
reviewer = Agent(name="reviewer", model="gpt-5.2-mini", instructions="Review for accuracy.")

# Coordinator
coordinator = Agent(
    name="coordinator",
    model="gpt-5.2-mini",
    instructions="""
    1. Ask 'researcher' for facts
    2. Ask 'writer' to summarize
    3. Ask 'reviewer' to verify
    Combine everything into a final response.
    """,
    subagents=[researcher, writer, reviewer],
)

async def main():
    runner = Runner()
    result = await runner.run(coordinator, user_message="Write a brief on quantum computing")
    print(result.final_text)

asyncio.run(main())

Delegation DAG model

The coordinator makes all delegation decisions. Subagents don’t talk to each other directly — they report back to the coordinator, which decides what to do next.
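This hub-and-spoke flow can be sketched in plain Python. The `researcher` and `writer` coroutines below are hypothetical stand-ins, not AFK APIs; the point is that results always pass through the coordinator, never between subagents:

```python
import asyncio

# Hypothetical stand-ins for AFK subagents: plain coroutines that receive a
# task from the coordinator and report back. They never call each other.
async def researcher(task: str) -> str:
    return f"facts about {task}"

async def writer(facts: str) -> str:
    return f"summary of {facts}"

async def coordinator(task: str) -> str:
    # Only the coordinator decides what happens next: it delegates to the
    # researcher, then feeds that report to the writer.
    facts = await researcher(task)
    return await writer(facts)

result = asyncio.run(coordinator("quantum computing"))
print(result)
```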

Orchestration pipeline

1. Plan: The coordinator decides which subagents to call and in what order, based on the user’s request and its instructions.

2. Validate: AFK checks the delegation request: does the subagent exist? Are the arguments valid? Does the policy allow it?

3. Schedule: The subagent is enqueued for execution. With fan-out, multiple subagents can run in parallel.

4. Execute: Each subagent runs a full agent loop (LLM calls, tool execution, etc.) and returns an AgentResult.

5. Aggregate: Results are collected according to the join policy. The coordinator receives them and decides whether to delegate more or produce a final response.
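The five stages above can be compressed into a minimal pure-Python sketch (the registry, names, and string results are illustrative assumptions, not AFK internals):

```python
import asyncio

# Hypothetical registry of known subagents, used by the Validate stage.
REGISTERED = {"researcher", "writer", "reviewer"}

async def run_subagent(name: str, task: str) -> str:
    # Execute: stand-in for a full agent loop returning an AgentResult-like value.
    return f"{name}:{task}"

async def orchestrate(task: str) -> list:
    # 1. Plan: the coordinator picks which subagents to call.
    plan = ["researcher", "writer", "reviewer"]
    # 2. Validate: reject delegations to unknown subagents.
    for name in plan:
        if name not in REGISTERED:
            raise ValueError(f"unknown subagent: {name}")
    # 3. Schedule: enqueue all of them for parallel (fan-out) execution.
    jobs = [run_subagent(name, task) for name in plan]
    # 4. Execute + 5. Aggregate: gather collects every result for the coordinator.
    return await asyncio.gather(*jobs)

results = asyncio.run(orchestrate("brief"))
print(results)
```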

Join policies

When multiple subagents run in parallel (fan-out), the join policy controls how the coordinator handles results:
all_required (default)

All subagents must succeed; any failure fails the entire delegation batch.

coordinator = Agent(
    name="coordinator",
    subagents=[researcher, writer],
    join_policy="all_required",   # ← Default
)

Use when: every subagent’s output is essential for the final result.
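Independent of AFK, the all_required semantics match the default behavior of `asyncio.gather`: the first failure propagates and fails the whole batch. The `ok`/`fail` coroutines here are hypothetical stand-ins for subagent runs:

```python
import asyncio

async def ok(name: str) -> str:
    return name

async def fail(name: str) -> str:
    raise RuntimeError(f"{name} failed")

async def all_required(jobs):
    # gather without return_exceptions=True mirrors all_required:
    # any subagent failure fails the entire delegation batch.
    return await asyncio.gather(*jobs)

# One failing subagent is enough to fail the batch.
try:
    asyncio.run(all_required([ok("researcher"), fail("writer")]))
    outcome = "succeeded"
except RuntimeError:
    outcome = "batch failed"
print(outcome)
```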

Failure handling

| Failure | all_required | allow_optional_failures | first_success | quorum |
| --- | --- | --- | --- | --- |
| One subagent fails | Batch fails | Continue with others | Continue waiting | Continue if quorum still reachable |
| All subagents fail | Batch fails | Batch fails | Batch fails | Batch fails |
| Timeout | Batch fails | Use available results | Batch fails | Depends on completed count |
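As a sketch of the quorum column (not AFK's implementation), a quorum join can be modeled with `asyncio.wait`: collect successes until the quorum is met, tolerate individual failures, and fail the batch only once the quorum becomes unreachable:

```python
import asyncio

async def subagent(name: str, delay: float, succeed: bool = True) -> str:
    # Hypothetical subagent run: sleeps, then succeeds or fails.
    await asyncio.sleep(delay)
    if not succeed:
        raise RuntimeError(name)
    return name

async def quorum(jobs, needed: int) -> list:
    results = []
    pending = [asyncio.ensure_future(j) for j in jobs]
    while pending:
        done, pending = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            if task.exception() is None:
                results.append(task.result())
        if len(results) >= needed:
            # Quorum reached: cancel the stragglers and return what we have.
            for task in pending:
                task.cancel()
            return results
        if len(results) + len(pending) < needed:
            break  # Too many failures: the quorum can no longer be reached.
    raise RuntimeError("quorum unreachable")

results = asyncio.run(quorum(
    [subagent("a", 0.01), subagent("b", 0.02, succeed=False), subagent("c", 0.03)],
    needed=2,
))
print(sorted(results))
```

Here subagent "b" fails, but the batch still succeeds because "a" and "c" satisfy the quorum of two.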

Backpressure

AFK limits concurrent subagent executions to prevent resource exhaustion:
coordinator = Agent(
    name="coordinator",
    subagents=[agent_1, agent_2, agent_3, agent_4, agent_5],
    max_concurrent_subagents=3,   # ← At most 3 run in parallel
)
When the concurrency limit is reached, additional subagent calls are queued and execute as slots become available.
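The queuing behavior can be sketched with an `asyncio.Semaphore` (a minimal stand-in for how a concurrency cap like `max_concurrent_subagents` could work, not AFK's actual scheduler):

```python
import asyncio

async def subagent(i: int) -> int:
    # Stand-in for a subagent's full run.
    await asyncio.sleep(0.01)
    return i

async def fan_out(coros, limit: int):
    # Backpressure sketch: a semaphore caps concurrent executions; calls
    # beyond the limit wait in line until a slot frees up.
    sem = asyncio.Semaphore(limit)
    running = 0
    peak = 0

    async def guarded(coro):
        nonlocal running, peak
        async with sem:
            running += 1
            peak = max(peak, running)  # track the highest observed concurrency
            result = await coro
            running -= 1
            return result

    results = await asyncio.gather(*(guarded(c) for c in coros))
    return results, peak

# Five subagents, at most three in flight at any moment.
results, peak = asyncio.run(fan_out([subagent(i) for i in range(5)], limit=3))
print(results, peak)
```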

When to use multi-agent delegation

| Scenario | Single agent | Multi-agent |
| --- | --- | --- |
| Simple Q&A or classification | ✅ | Overkill |
| Task needs different expertise | Consider | ✅ |
| Need to parallelize work | N/A | ✅ |
| Task needs consensus/verification | N/A | ✅ |
| Tight latency budget | ✅ (fewer LLM calls) | ❌ (more LLM calls) |

Next steps