Adapters translate between AFK’s normalized contracts (LLMRequest/LLMResponse) and provider-specific APIs. AFK ships with three built-in adapters and supports custom adapters for any provider.

Built-in providers

OpenAI

Direct integration via the OpenAI Python SDK. Supports all GPT-4.1 and o-series models.

Anthropic

Direct integration via the Anthropic SDK. Supports Claude models, including Claude Opus 4.5.

LiteLLM

Proxy adapter for 100+ providers (Azure, Bedrock, Gemini, Mistral, local models, etc.).

Capability comparison

| Feature | OpenAI | Anthropic | LiteLLM |
| --- | --- | --- | --- |
| Text generation | ✓ | ✓ | ✓ |
| Tool calling | ✓ | ✓ | ✓ (provider-dependent) |
| Structured output | ✓ | ✓ | Provider-dependent |
| Streaming | ✓ | ✓ | ✓ |
| Vision (image input) | ✓ | ✓ | Provider-dependent |
| Custom endpoints | ✓ | ✓ | ✓ |

Usage

```python
from afk.llms import LLMBuilder

# OpenAI
openai_client = LLMBuilder().provider("openai").model("gpt-5.2-mini").build()

# Anthropic
anthropic_client = LLMBuilder().provider("anthropic").model("claude-opus-4-5").build()

# LiteLLM (any provider)
gemini_client = LLMBuilder().provider("litellm").model("gemini/gemini-2.5-pro").build()
```

Custom adapter

Register your own adapter for unsupported providers or custom inference servers:
1. Implement the LLMAdapter protocol

```python
import httpx

from afk.llms import LLMAdapter, LLMRequest, LLMResponse

class MyCustomAdapter(LLMAdapter):
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.api_key = api_key

    async def generate(self, request: LLMRequest) -> LLMResponse:
        # Translate LLMRequest → your provider's format
        payload = self._build_payload(request)

        # Make the API call
        async with httpx.AsyncClient() as client:
            resp = await client.post(
                f"{self.base_url}/v1/complete",
                json=payload,
                headers={"Authorization": f"Bearer {self.api_key}"},
            )
            resp.raise_for_status()

        # Translate provider response → LLMResponse
        return self._parse_response(resp.json())

    async def generate_stream(self, request: LLMRequest):
        # Optional: implement streaming
        ...
```
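The `_build_payload` and `_parse_response` helpers are where the adapter-specific mapping lives. A minimal sketch over plain dicts — the field names here (`messages`, `max_tokens`, `completion`) are hypothetical placeholders for your provider's actual schema, and the real helpers would map to/from AFK's LLMRequest/LLMResponse:

```python
def build_payload(model: str, messages: list[dict], max_tokens: int = 1024) -> dict:
    # Map normalized request fields onto the (hypothetical) provider schema.
    return {
        "model": model,
        "messages": [{"role": m["role"], "content": m["content"]} for m in messages],
        "max_tokens": max_tokens,
    }

def parse_response(data: dict) -> str:
    # Pull the completion text out of the (hypothetical) provider response.
    return data["completion"]["text"]
```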
2. Register the adapter

```python
from afk.llms import register_adapter

register_adapter("my-provider", MyCustomAdapter)
```
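An adapter registry like this is typically just a name-to-class mapping that the builder consults at `build()` time. A minimal sketch of the pattern (not AFK's actual implementation; the names `resolve_adapter` and `_ADAPTERS` are illustrative):

```python
_ADAPTERS: dict[str, type] = {}

def register_adapter(name: str, adapter_cls: type) -> None:
    # Later registrations under the same name override earlier ones.
    _ADAPTERS[name] = adapter_cls

def resolve_adapter(name: str) -> type:
    try:
        return _ADAPTERS[name]
    except KeyError:
        raise ValueError(f"Unknown provider: {name!r}") from None

class DummyAdapter:
    pass

register_adapter("my-provider", DummyAdapter)
```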
3. Use it

```python
client = LLMBuilder().provider("my-provider").model("my-model").build()

agent = Agent(name="demo", model=client, instructions="...")
```

Custom transport

Override the HTTP transport layer for any adapter (useful for proxies, mTLS, or custom retry logic):

```python
import httpx

transport = httpx.AsyncHTTPTransport(
    limits=httpx.Limits(max_connections=50, max_keepalive_connections=10),
    retries=3,
)

client = (
    LLMBuilder()
    .provider("openai")
    .model("gpt-5.2-mini")
    .transport(transport)
    .build()
)
```
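Note that httpx's transport-level `retries` only re-attempts connection failures, not HTTP-level errors. If you need application-level retry logic, a generic exponential-backoff wrapper is one option — a pure-stdlib sketch, with illustrative `attempts` and `base_delay` parameters:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5, sleep=time.sleep):
    # Retry fn() on any exception, doubling the delay after each failure.
    last_exc = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if attempt < attempts - 1:
                sleep(base_delay * (2 ** attempt))
    raise last_exc

# Example: a call that fails twice, then succeeds on the third attempt.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky, sleep=lambda _: None)  # → "ok" after 3 calls
```

In production you would typically retry only on specific exception types (timeouts, 429s) rather than bare `Exception`.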
