LLMRequest (what you send) and LLMResponse (what you get back). These contracts normalize the differences between providers so your agent code never touches provider-specific types.
The flow
The adapter translates between AFK’s normalized contracts and the provider’s native format.
LLMRequest
| Field | Type | Purpose |
|---|---|---|
| messages | list[Message] | Conversation history (system, user, assistant, tool) |
| model | str | Model identifier |
| tools | list[ToolSchema] | Available tool schemas |
| temperature | float | Sampling temperature (0.0–2.0) |
| max_tokens | int \| None | Max output tokens |
| top_p | float \| None | Nucleus sampling |
| response_format | ResponseFormat \| None | Structured output format |
| stop | list[str] \| None | Stop sequences |
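The fields above can be expressed as a plain payload for illustration. This is a sketch of what an `LLMRequest` carries, written as a dict; AFK's actual constructor and the `get_weather` tool schema are assumptions.

```python
# Sketch of an LLMRequest payload mirroring the field table (plain dict
# for illustration; AFK's real type is a typed class).
request = {
    "messages": [
        {"role": "system", "content": "You are a helpful agent."},
        {"role": "user", "content": "What's the weather in Oslo?"},
    ],
    "model": "gpt-4o",
    "tools": [{"name": "get_weather", "parameters": {"city": "string"}}],
    "temperature": 0.2,       # sampling temperature, 0.0-2.0
    "max_tokens": 512,        # optional cap on output tokens
    "top_p": None,            # optional nucleus sampling
    "response_format": None,  # optional structured-output format
    "stop": ["\nUser:"],      # optional stop sequences
}
```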
LLMResponse
| Field | Type | Purpose |
|---|---|---|
| content | str \| None | Text response from the model |
| tool_calls | list[ToolCall] | Tool calls requested by the model |
| model | str | Model that generated the response |
| usage | Usage | Token counts (prompt, completion, total) |
| finish_reason | str | Why generation stopped (stop, tool_calls, length) |
| latency_ms | float | End-to-end request latency |
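A typical consumer branches on `finish_reason`, per the table above. A minimal sketch, with the response written as a dict rather than AFK's typed `LLMResponse`:

```python
def handle(resp: dict) -> str:
    # Dispatch on finish_reason: stop, tool_calls, or length.
    reason = resp["finish_reason"]
    if reason == "tool_calls":
        # content is typically None here; the model wants tools executed.
        return f"execute {len(resp['tool_calls'])} tool call(s)"
    if reason == "length":
        return "truncated: hit max_tokens"
    return resp["content"] or ""

out = handle({
    "content": None,
    "tool_calls": [{"name": "get_weather"}],
    "model": "gpt-4o",
    "finish_reason": "tool_calls",
})
```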
Structured output
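AFK's exact structured-output call isn't reproduced in this excerpt, so the following is a hedged sketch: it assumes `response_format` carries a JSON schema derived from a Pydantic model (the `"json_schema"` wrapper shape is an assumption), and shows the reply being validated back into the model type.

```python
from pydantic import BaseModel

class WeatherReport(BaseModel):
    city: str
    temperature_c: float
    conditions: str

# Assumption: response_format accepts a JSON schema derived from the
# Pydantic model; AFK's actual constructor may differ.
response_format = {
    "type": "json_schema",
    "json_schema": WeatherReport.model_json_schema(),
}

request = {
    "messages": [{"role": "user", "content": "Weather in Oslo as JSON."}],
    "model": "gpt-4o",
    "response_format": response_format,
}

# The model's JSON reply can then be validated back into the type:
report = WeatherReport.model_validate_json(
    '{"city": "Oslo", "temperature_c": 3.5, "conditions": "overcast"}'
)
```

Validating the reply through the same model that produced the schema keeps the request and the parsing logic from drifting apart.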
Request structured JSON output by passing a Pydantic model's schema as the request's response_format.
Message types
| Role | Purpose | Source |
|---|---|---|
| system | Agent instructions | From Agent.instructions |
| user | User input | From user_message parameter |
| assistant | Model responses | Generated by the LLM |
| tool | Tool results | From tool execution |