LLMRequest / LLMResponse) while built-in adapters handle the provider-specific details.
## The LLMBuilder
Create LLM clients with the builder pattern.

## Supported providers
- **OpenAI**: GPT-4.1, GPT-4.1-mini, GPT-4.1-nano, and the o-series
- **Anthropic**: Claude Opus 4.5
- **LiteLLM**: 100+ providers via the LiteLLM proxy
Every adapter implements the same `LLMClient` interface. Your agent code never touches provider-specific types.
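As a rough sketch of how builder-style construction and the shared interface could fit together (every class, method, and model name below is an assumption for illustration, not the library's actual API):

```python
from dataclasses import dataclass

# Hypothetical request/response types, named after the text above.
@dataclass
class LLMRequest:
    prompt: str

@dataclass
class LLMResponse:
    text: str

class LLMClient:
    """Provider-agnostic interface that every adapter implements."""
    def complete(self, request: LLMRequest) -> LLMResponse:
        raise NotImplementedError

class EchoClient(LLMClient):
    """Stand-in adapter so this sketch runs without network access."""
    def __init__(self, provider: str, model: str):
        self.provider = provider
        self.model = model

    def complete(self, request: LLMRequest) -> LLMResponse:
        return LLMResponse(text=f"[{self.model}] {request.prompt}")

class LLMBuilder:
    """Fluent builder that resolves a provider name to an adapter."""
    def __init__(self):
        self._provider = "openai"
        self._model = "gpt-5.2-mini"

    def provider(self, name: str) -> "LLMBuilder":
        self._provider = name
        return self

    def model(self, name: str) -> "LLMBuilder":
        self._model = name
        return self

    def build(self) -> LLMClient:
        # A real builder would dispatch to an OpenAI, Anthropic, or
        # LiteLLM adapter here; the stub keeps the example self-contained.
        return EchoClient(self._provider, self._model)

client = LLMBuilder().provider("anthropic").model("claude-opus-4-5").build()
print(client.complete(LLMRequest(prompt="hello")).text)
```

Because `build()` returns the common `LLMClient` type, swapping providers changes only the builder calls, not any downstream code.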
## Which provider should I use?
| Scenario | Recommended |
|---|---|
| General purpose | OpenAI gpt-5.2-mini |
| Complex reasoning | OpenAI gpt-5.2 or Anthropic claude-opus-4-5 |
| Cost-sensitive | OpenAI gpt-5.2-nano |
| Non-OpenAI/Anthropic model | LiteLLM adapter |
| Custom or self-hosted | Custom adapter |
## How agents use the LLM layer
You rarely build an `LLMClient` directly. Agents resolve their model automatically: