An Agent is a configuration object that describes what your AI agent is — its identity, capabilities, and boundaries. Agents don’t execute themselves; they’re run by a Runner.

Your first agent

from afk.agents import Agent

agent = Agent(
    name="assistant",                          # ← Identity (used in logs, telemetry)
    model="gpt-5.2-mini",                      # ← Which LLM to use
    instructions="Be helpful and concise.",     # ← System prompt
)
That’s it. Three fields define a working agent. Everything else is optional.

Agent fields reference

| Field | Type | Default | Purpose |
| --- | --- | --- | --- |
| model | str or LLM | required | LLM model name or pre-built client instance |
| name | str | None | Agent identity for logs, telemetry, and subagent routing |
| instructions | str | None | System prompt — what the agent knows and how it behaves |
| instruction_file | str or Path | None | Path to a .txt or .md file containing the system prompt |
| prompts_dir | str or Path | None | Directory containing prompt files resolved by the prompt store |
| tools | list[tool] | None | Typed functions the agent can call |
| subagents | list[Agent] | None | Specialist agents this agent can delegate to |
| skills | list[str] | None | Skill names to resolve from skills_dir |
| skills_dir | str or Path | ".agents/skills" | Directory containing skill packs |
| mcp_servers | list[MCPServerLike] | None | MCP server configs for external tool discovery |
| fail_safe | FailSafeConfig | defaults | Step limits, cost budgets, timeout, and failure policies |
| context_defaults | dict | None | Default JSON context merged into each run before caller context |
| inherit_context_keys | list[str] | None | Context keys inherited from parent agent in delegation |
| model_resolver | callable | None | Custom function to resolve model names to LLM clients |
| instruction_roles | list[InstructionRole] | None | Structured instruction sections with role-based ordering |
| policy_roles | list[PolicyRole] | None | Role-based policy rules applied during execution |
| policy_engine | PolicyEngine | None | Policy engine for tool/action gating |
| subagent_router | SubagentRouter | None | Custom routing logic for subagent delegation |
| max_steps | int | 20 | Maximum agent loop iterations |
| tool_parallelism | int | None | Max concurrent tool executions per step |
| subagent_parallelism_mode | str | "configurable" | How subagent concurrency is managed |
| reasoning_enabled | bool | None | Enable extended thinking / chain-of-thought |
| reasoning_effort | str | None | Thinking effort level (e.g. "low", "medium", "high") |
| reasoning_max_tokens | int | None | Token budget for extended thinking |
| skill_tool_policy | SkillToolPolicy | None | Security policy for skill-provided tool execution |
| enable_skill_tools | bool | True | Whether to expose skill tools to the agent |
| enable_mcp_tools | bool | True | Whether to expose MCP-discovered tools to the agent |
| runner | Runner | None | Pre-bound runner instance (advanced usage) |
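One field worth spelling out: context_defaults is merged into each run before the caller's context, so caller-supplied keys win. A minimal sketch of that merge in plain Python (the shallow-merge semantics are an assumption, not a guarantee from AFK):

```python
def merge_context(context_defaults: dict, caller_context: dict) -> dict:
    """Defaults go in first, caller context second, so caller keys take precedence."""
    return {**(context_defaults or {}), **(caller_context or {})}

merged = merge_context(
    {"locale": "en-US", "tier": "free"},  # agent's context_defaults
    {"tier": "pro"},                      # per-run caller context
)
print(merged)  # {'locale': 'en-US', 'tier': 'pro'}
```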

Single agent vs multi-agent

A single agent handles everything. Best for focused tasks.
from afk.agents import Agent, Runner  # Runner's import path is assumed here

runner = Runner()

agent = Agent(
    name="classifier",
    model="gpt-5.2-mini",
    instructions="""
    Classify the input into one of: positive, negative, neutral.
    Output only the label.
    """,
)

result = runner.run_sync(agent, user_message="I love this product!")
print(result.final_text)  # "positive"
Use when: The task is well-defined and doesn’t need specialized sub-expertise.

How subagent delegation works

When an agent has subagents, AFK automatically generates transfer tools (transfer_to_researcher, transfer_to_writer). The coordinator calls these like any other tool. Each subagent runs a full agent loop with its own model, instructions, and tools. The coordinator sees only the subagent’s final_text.
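The naming convention for generated transfer tools can be sketched in plain Python. This is a stand-in for what AFK does internally; the exact normalization rules are an assumption:

```python
def transfer_tool_name(subagent_name: str) -> str:
    """Derive a transfer tool name from a subagent's `name` field."""
    # Normalize to a valid identifier: lowercase, spaces/hyphens -> underscores
    # (the normalization details are assumed, not taken from AFK's source)
    normalized = subagent_name.strip().lower().replace(" ", "_").replace("-", "_")
    return f"transfer_to_{normalized}"

print(transfer_tool_name("researcher"))     # transfer_to_researcher
print(transfer_tool_name("Report Writer"))  # transfer_to_report_writer
```

The coordinator invokes these generated tools exactly like hand-written ones, which is why delegation needs no special-case code in your prompt.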

Adding safety limits

Every agent should have a FailSafeConfig in production:
from afk.agents import Agent, FailSafeConfig

agent = Agent(
    name="safe-agent",
    model="gpt-5.2-mini",
    instructions="...",
    tools=[...],
    fail_safe=FailSafeConfig(
        max_steps=15,              # Max agent loop iterations
        max_llm_calls=10,          # Max LLM API calls
        max_tool_calls=20,         # Max tool executions
        max_wall_time_s=60.0,      # Max run duration in seconds
        max_total_cost_usd=0.50,   # Max estimated cost

        # What to do when things fail
        llm_failure_policy="retry_then_degrade",         # "retry_then_fail" | "retry_then_degrade" | "fail_fast"
        tool_failure_policy="continue_with_error",       # "continue_with_error" | "retry_then_continue" | "fail_run"
        subagent_failure_policy="continue",               # "continue" | "retry_then_fail" | "skip_action"

        # Fallback model chain for LLM resilience
        fallback_model_chain=["gpt-5.2-mini", "gpt-5.2-nano"],
    ),
)
Always set max_total_cost_usd in production. A runaway agent loop can spend significant API credits in minutes.
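To make the budget semantics concrete, here is a self-contained sketch of the kind of bookkeeping the runner performs. This is pure illustration, not AFK's internals, and the stop-reason strings are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Budget:
    max_steps: int = 15
    max_total_cost_usd: float = 0.50

def run_loop(budget: Budget, step_cost_usd: float) -> tuple[int, str]:
    """Simulate an agent loop that stops when the first limit is hit."""
    steps, cost = 0, 0.0
    while True:
        if steps >= budget.max_steps:
            return steps, "max_steps"
        if cost + step_cost_usd > budget.max_total_cost_usd:
            return steps, "max_total_cost_usd"
        steps += 1
        cost += step_cost_usd

# A $0.05-per-step run hits the $0.50 cost budget before the 15-step ceiling
print(run_loop(Budget(), step_cost_usd=0.05))
```

Whichever limit trips first ends the run, which is why setting several overlapping limits is cheap insurance.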

Policy-aware agents

Attach a PolicyEngine to control what the agent can do:
from afk.agents import Agent, PolicyEngine, PolicyRule, Runner

policy = PolicyEngine(rules=[
    PolicyRule(
        rule_id="require-approval-for-writes",
        condition=lambda event: event.tool_name and "write" in event.tool_name,
        action="request_approval",
        reason="Write operations need human approval",
    ),
    PolicyRule(
        rule_id="deny-admin-tools",
        condition=lambda event: event.tool_name and "admin" in event.tool_name,
        action="deny",
        reason="Admin tools are disabled in this environment",
    ),
])

runner = Runner(policy_engine=policy)
Policy decisions: allow (default), deny, request_approval (human-in-the-loop), or request_user_input.
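As a mental model, rule evaluation can be sketched as first-match-wins with allow as the fallback. This is a plain-Python illustration; AFK's actual precedence and event shape may differ:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToolEvent:
    tool_name: Optional[str] = None

@dataclass
class Rule:
    rule_id: str
    condition: Callable[[ToolEvent], object]
    action: str

def evaluate(rules: list[Rule], event: ToolEvent) -> str:
    """Return the first matching rule's action, or 'allow' if nothing matches."""
    for rule in rules:
        if rule.condition(event):
            return rule.action
    return "allow"

rules = [
    Rule("require-approval-for-writes",
         lambda e: e.tool_name and "write" in e.tool_name,
         "request_approval"),
    Rule("deny-admin-tools",
         lambda e: e.tool_name and "admin" in e.tool_name,
         "deny"),
]

print(evaluate(rules, ToolEvent("write_file")))   # request_approval
print(evaluate(rules, ToolEvent("admin_reset")))  # deny
print(evaluate(rules, ToolEvent("read_file")))    # allow
```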

Design guidelines

  • Start with one agent. Only add subagents when you have clear evidence that the task needs specialized expertise.
  • Keep instructions focused. Vague instructions produce vague results. Tell the agent exactly what to do and what not to do.
  • Use typed tools. Every tool argument should be a Pydantic model. Untyped arguments bypass validation.
  • Set cost limits early. Add FailSafeConfig before your first deployment, not after your first runaway bill.

Next steps