Persist conversation state, resume runs, and compact threads.
AFK’s memory system persists conversation state across runs. Use it for multi-turn conversations, run resumption after interrupts, long-term knowledge retention, and vector-based semantic search.
| Layer | What's stored | When written |
|---|---|---|
| Thread | User messages, assistant responses, tool calls and results | After each run step |
| Checkpoint | Full run state at a point in time | At step boundaries (pre-LLM, post-tool) |
| State (KV) | Checkpoint pointers, effect journal, background tool state | During and after runs |
| Long-term memory | Persistent knowledge with optional embeddings | Via `upsert_long_term_memory` |
What’s NOT stored automatically: Raw LLM provider responses or internal
framework temporaries. Only conversation-visible records and explicit state
writes are persisted.
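To make "conversation-visible records" concrete, here is a minimal illustrative sketch of the thread layer accumulating records after each step. The record shape shown is an assumption for illustration, not AFK's actual type:

```python
from dataclasses import dataclass, field

# Illustrative only: AFK's real thread record type may differ.
@dataclass
class ThreadRecord:
    role: str  # "user", "assistant", or "tool"
    content: str

@dataclass
class Thread:
    records: list[ThreadRecord] = field(default_factory=list)

    def append(self, role: str, content: str) -> None:
        # In the real system, these records are persisted after each run step.
        self.records.append(ThreadRecord(role, content))

thread = Thread()
thread.append("user", "Analyze this dataset...")
thread.append("assistant", "Starting analysis.")
```

Raw provider responses never enter this structure; only the records the conversation can see are written.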
If a run is interrupted (crash, timeout, pause for approval), resume from the last checkpoint:
```python
# Start a run that might be long
result = await runner.run(agent, user_message="Analyze this dataset...")

# If interrupted, resume later
if result.state == "interrupted":
    resumed = await runner.resume(
        agent,
        run_id=result.run_id,
        thread_id=result.thread_id,
    )
    print(resumed.final_text)
```
Checkpoints are written at key boundaries: before each LLM call, after
each tool batch, and after each step completes. On resume, completed tool
calls are replayed from the effect journal — no duplicate side effects.
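The replay behavior can be illustrated with a self-contained sketch. The journal structure below is an assumption for illustration, not AFK's internals; the point is that a completed call's recorded result is returned on resume instead of re-executing the tool:

```python
# Illustrative effect-journal replay (not AFK's actual implementation).
journal: dict[str, str] = {}   # call_id -> recorded result
side_effects: list[str] = []   # tracks real tool executions

def run_tool(call_id: str, arg: str) -> str:
    if call_id in journal:
        return journal[call_id]      # replay: no duplicate side effect
    side_effects.append(call_id)     # the real (side-effecting) execution
    result = f"processed:{arg}"
    journal[call_id] = result        # journal the result before moving on
    return result

first = run_tool("call-1", "dataset.csv")   # original run: tool executes
second = run_tool("call-1", "dataset.csv")  # resumed run: replayed from journal
```

Both calls return the same result, but the side effect happens exactly once.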
AFK ships with four backends. All implement the MemoryStore protocol.
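As a rough sketch of what satisfying such a protocol looks like, here is an illustrative structural check. The method subset shown is an assumption based on calls used elsewhere on this page; AFK's `MemoryStore` protocol defines the authoritative interface:

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class MemoryStoreLike(Protocol):
    """Illustrative subset of a memory-store interface (not the full protocol)."""

    async def upsert_long_term_memory(self, **kwargs: Any) -> None: ...

    async def search_long_term_memory_vector(self, **kwargs: Any) -> list[Any]: ...

class DummyStore:
    """A minimal backend that structurally satisfies the sketched protocol."""

    async def upsert_long_term_memory(self, **kwargs: Any) -> None:
        pass

    async def search_long_term_memory_vector(self, **kwargs: Any) -> list[Any]:
        return []

# Structural typing: no inheritance needed, the method set is what counts.
assert isinstance(DummyStore(), MemoryStoreLike)
```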
- In-memory (default)
- SQLite
- PostgreSQL
- Redis
**In-memory (default)**: State lives in process memory. Fast, no setup, but lost on restart.
```python
from afk.memory import InMemoryMemoryStore

runner = Runner(memory_store=InMemoryMemoryStore())
# Or just: Runner() — in-memory is the default
```
Use for: Development, testing, short-lived scripts.
**SQLite**: Persistent local storage with JSON serialization and local vector search.
```python
from afk.memory.adapters.sqlite import SQLiteMemoryStore

runner = Runner(memory_store=SQLiteMemoryStore(path="agent_memory.sqlite3"))
```
Features: WAL mode, text search, vector similarity search (cosine), atomic upsert.
Use for: Local development with persistence, single-process deployments.
**PostgreSQL**: Production-grade backend with pgvector support for vector search.
```python
from afk.memory.adapters.postgres import PostgresMemoryStore

runner = Runner(
    memory_store=PostgresMemoryStore(dsn="postgresql://user:pass@host/db")
)
```
Use for: Production multi-process deployments.
**Redis**: In-memory store backed by Redis for shared state across processes.
```python
from afk.memory.adapters.redis import RedisMemoryStore

runner = Runner(
    memory_store=RedisMemoryStore(url="redis://localhost:6379")
)
```
Use for: Shared state across workers, ephemeral but durable-enough sessions.
Declare capabilities to tell the framework which features your backend
supports. Features like vector search are only used when the backend
declares support.
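One common shape for this pattern is a capability set the framework consults before using a feature. The attribute and capability names below are assumptions for illustration, not AFK's actual API:

```python
# Illustrative capability gating (names are assumptions, not AFK's API).
class CustomStore:
    capabilities = {"text_search"}  # deliberately omits "vector_search"

def supports_vector_search(store: object) -> bool:
    # The framework only attempts vector search when the backend declares it.
    return "vector_search" in getattr(store, "capabilities", set())

store = CustomStore()
use_vectors = supports_vector_search(store)  # False: falls back to text search
```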
Backends that support vector search (SQLite, Postgres) can find semantically similar memories:
```python
# Search by embedding similarity
results = await memory_store.search_long_term_memory_vector(
    user_id="user-123",
    query_embedding=embedding_vector,  # list[float] from your embedding model
    scope="knowledge",
    limit=5,
    min_score=0.7,
)
for memory, score in results:
    print(f"{score:.2f}: {memory.text}")
```
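The `min_score` threshold filters on the similarity metric. For cosine similarity, the score between two vectors can be computed in plain Python like this (an illustration of the metric, not the backend's implementation):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal vectors score 0.0.
same = cosine_similarity([1.0, 0.0], [2.0, 0.0])
ortho = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```

A memory whose embedding scores at or above `min_score` (0.7 in the example above) passes the filter; anything below it is dropped from the results.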