The LLM layer normalizes communication with language models across all supported providers. Your agent code uses provider-agnostic contracts (LLMRequest / LLMResponse) while built-in adapters handle the provider-specific details.

The LLMBuilder

Create LLM clients with the builder pattern:
from afk.llms import LLMBuilder

client = (
    LLMBuilder()
    .provider("openai")
    .model("gpt-5.2-mini")
    .build()
)

1. Choose a provider

builder = LLMBuilder().provider("openai")
# Also: "anthropic", "litellm", or a custom adapter

2. Set the model

builder = builder.model("gpt-5.2-mini")

3. Add policies (optional)

builder = builder.profile("production")
# retry, timeout, rate limit, circuit breaker
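The profile internals are not shown here, but a retry policy of this kind typically wraps each LLM call with exponential backoff. The sketch below illustrates the concept; it is not afk's implementation, and the function name and defaults are assumptions.

```python
import random
import time

# Illustrative sketch of what a profile's retry policy conceptually does.
# Not afk's actual code; names and defaults are assumptions.

def retry_with_backoff(call, max_attempts=3, base_delay=0.5, sleep=time.sleep):
    """Retry `call` on exception, roughly doubling the delay each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # jittered exponential backoff: ~0.5s, ~1s, ~2s, ...
            sleep(base_delay * 2 ** attempt * (1 + random.random() * 0.1))
```

Injecting `sleep` keeps the policy testable; in production the default `time.sleep` applies the real delay.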

4. Build

client = builder.build()

Supported providers

OpenAI

GPT-5.2, GPT-5.2-mini, GPT-5.2-nano, o-series

Anthropic

Claude Opus 4.5

LiteLLM

100+ providers via the LiteLLM proxy

All providers expose the same LLMClient interface. Your agent code never touches provider-specific types.
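Because everything behind the builder satisfies one shared interface, a custom adapter only has to implement that interface. The sketch below assumes a single `complete` method for brevity; the real LLMClient surface in afk may be larger.

```python
from typing import Protocol, runtime_checkable

# Illustrative: the real LLMClient interface in afk likely has more
# methods (streaming, tool calls, etc.). The shape of the idea is the same.
@runtime_checkable
class LLMClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoAdapter:
    """Toy custom adapter: satisfies the interface by echoing the prompt.

    A real adapter would translate the request into a provider SDK call
    and map the provider's response back to the shared response type.
    """
    def complete(self, prompt: str) -> str:
        return prompt
```

Any object with the right methods plugs in; agent code never sees the provider client underneath.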

Which provider should I use?

| Scenario | Recommended |
| --- | --- |
| General purpose | OpenAI gpt-5.2-mini |
| Complex reasoning | OpenAI gpt-5.2 or Anthropic claude-opus-4-5 |
| Cost-sensitive | OpenAI gpt-5.2-nano |
| Non-OpenAI/Anthropic model | LiteLLM adapter |
| Custom or self-hosted | Custom adapter |

How agents use the LLM layer

You rarely build LLMClient directly. Agents resolve their model automatically:
# Option 1: Model name (auto-resolved)
agent = Agent(name="demo", model="gpt-5.2-mini", ...)

# Option 2: Pre-built client (full control)
client = LLMBuilder().provider("openai").model("gpt-5.2-mini").profile("production").build()
agent = Agent(name="demo", model=client, ...)
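The string-vs-client resolution above can be sketched as follows. Names here are illustrative, not afk internals: a plain string is built into a client with default policies, while a pre-built client passes through untouched.

```python
# Illustrative sketch of the two resolution paths (not afk's actual code).

class PrebuiltClient:
    """Stand-in for whatever LLMBuilder.build() returns."""

def resolve_model(model):
    if isinstance(model, str):
        # Option 1: a model name is auto-resolved into a client
        # constructed with default provider and policies.
        return PrebuiltClient()
    # Option 2: a pre-built client is used as-is, so the caller
    # keeps full control over provider, profile, and policies.
    return model
```

This is why passing a pre-built client is the escape hatch whenever the defaults are not what you want.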

Next steps