The A2A protocol enables agent communication across system boundaries — between services, organizations, or deployment environments. It builds on internal messaging by adding authentication, authorization, and external transport.
## Architecture

### Three integration layers
| Layer | What it does | When you need it |
|---|---|---|
| Internal Protocol | Typed envelopes with idempotency | Always (agents in the same system) |
| Auth Provider | Token validation, caller identity | When agents are in different services |
| External Adapter | HTTP/gRPC transport, discovery | When agents are in different systems |
### Request flow
**1. Client builds invocation request**

```python
from afk.agents.contracts import AgentInvocationRequest

request = AgentInvocationRequest(
    target_agent="analyzer",
    user_message="Analyze this dataset",
    context={"source": "system-a"},
    idempotency_key="analysis-42",
)
```
**2. Auth provider validates**

The server validates the auth token, extracts the caller identity, and checks authorization rules.
**3. Server runs the agent**

The target agent executes locally with a Runner, using the invocation request as input.
**4. Response returned**

```python
response = await client.invoke(request)

print(response.final_text)  # Agent's response
print(response.state)       # "completed"
print(response.run_id)      # For tracing
```
## Invocation contracts
```python
class AgentInvocationRequest(BaseModel):
    target_agent: str             # Which agent to invoke
    user_message: str             # The input message
    context: dict = {}            # Additional context
    idempotency_key: str          # Deduplication key
    timeout_s: float = 60.0       # Max wait time
    thread_id: str | None = None  # For multi-turn

class AgentInvocationResponse(BaseModel):
    final_text: str               # Agent's response
    state: str                    # "completed", "failed", or "degraded"
    run_id: str                   # Unique run identifier
    error: str | None = None      # Error details (if failed)
    usage: UsageAggregate | None  # Token usage
```
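Callers must supply a stable `idempotency_key` so that retries of the same logical request deduplicate server-side. One common approach, sketched below with a hypothetical helper (`derive_idempotency_key` is not part of AFK), is to derive the key deterministically from the request content:

```python
import hashlib
import json


def derive_idempotency_key(target_agent: str, user_message: str,
                           context: dict) -> str:
    """Derive a stable deduplication key from request content.

    Illustrative helper, not part of AFK: the same logical request
    always maps to the same key, so a retried invocation carries the
    same idempotency_key and the server can skip duplicate processing.
    """
    payload = json.dumps(
        {"agent": target_agent, "message": user_message, "context": context},
        sort_keys=True,  # canonical ordering keeps the hash stable
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]
```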
## Hosting an A2A service

Expose your agents as an A2A-accessible service:
```python
from afk.agents.a2a import A2AServiceHost
from afk.agents.a2a.auth import APIKeyA2AAuthProvider
from afk.agents import Agent
from afk.core import Runner

# Define the agent
analyzer = Agent(name="analyzer", model="gpt-5.2-mini", instructions="Analyze data.")

# Create the auth provider
auth = APIKeyA2AAuthProvider(
    keys={"system-a": "token-abc", "system-c": "token-xyz"},
    server_secret="hmac-secret-for-key-hashing",
)

# Start the server
server = A2AServiceHost(
    agents={"analyzer": analyzer},
    runner_factory=lambda: Runner(),
    auth_provider=auth,
    host="0.0.0.0",
    port=8080,
)

await server.start()
```
## Authentication providers
AFK ships with three auth providers:
### AllowAll (dev only)

Permits all requests without authentication. Never use in production.

```python
from afk.agents.a2a.auth import AllowAllA2AAuthProvider

auth = AllowAllA2AAuthProvider()
```

### API Key

Validates requests against pre-shared API keys with HMAC-SHA256 hashing.

```python
from afk.agents.a2a.auth import APIKeyA2AAuthProvider

auth = APIKeyA2AAuthProvider(
    keys={"caller-id": "secret-key"},
    server_secret="hmac-server-secret",
)
```
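The HMAC-SHA256 hashing mentioned above means the server never needs to store plaintext API keys. A stdlib sketch of that mechanism follows; the helper names and storage details are illustrative, and AFK's internal implementation may differ:

```python
import hashlib
import hmac


def hash_api_key(api_key: str, server_secret: str) -> str:
    """HMAC-SHA256 a pre-shared key under the server secret,
    so the key store holds only hashes, never plaintext keys."""
    return hmac.new(
        server_secret.encode("utf-8"),
        api_key.encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()


def verify_api_key(presented: str, stored_hash: str, server_secret: str) -> bool:
    """Hash the presented key and compare in constant time."""
    return hmac.compare_digest(hash_api_key(presented, server_secret), stored_hash)
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.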
### JWT

Validates JWT tokens with configurable issuer and audience claims.

```python
from afk.agents.a2a.auth import JWTA2AAuthProvider

auth = JWTA2AAuthProvider(
    secret="jwt-signing-secret",
    algorithm="HS256",
    issuer="https://auth.example.com",
    audience="agent-service",
)
```
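To make the configuration above concrete, the sketch below shows what HS256 validation with issuer and audience checks amounts to, using only the standard library. This is a teaching sketch, not AFK's implementation (which presumably wraps a JWT library), and it omits expiry (`exp`) and other registered claims for brevity:

```python
import base64
import hashlib
import hmac
import json


def b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()


def verify_hs256(token: str, secret: str, issuer: str, audience: str) -> dict:
    """Verify an HS256 JWT's signature plus iss/aud claims.

    Raises ValueError on any failure; returns the claims on success.
    Expiry and other registered claims are omitted for brevity.
    """
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("iss") != issuer:
        raise ValueError("bad issuer")
    if claims.get("aud") != audience:
        raise ValueError("bad audience")
    return claims
```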
## Google A2A adapter

For interoperability with Google's A2A protocol, use the Google adapter:
```python
from afk.agents.a2a.google_adapter import GoogleA2AAdapter

adapter = GoogleA2AAdapter(
    protocol=internal_protocol,
    auth_provider=auth,
)
```
The adapter normalizes Google A2A envelope formats to AFK’s internal protocol types.
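To illustrate the kind of normalization involved, the sketch below flattens a Google A2A-style message (`role` plus a list of `parts` with text entries) into a single string. The Google-side field names follow the Google A2A message format; the output shape is purely illustrative and is not AFK's actual protocol type:

```python
def normalize_google_message(msg: dict) -> dict:
    """Flatten a Google A2A-style message into a simple internal shape.

    Illustrative only: real adapter logic also handles non-text parts,
    task metadata, and error mapping.
    """
    text = "".join(
        part.get("text", "")
        for part in msg.get("parts", [])
        if part.get("kind") == "text"  # skip file/data parts in this sketch
    )
    return {"role": msg.get("role", "user"), "user_message": text}
```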
## Security considerations
| Concern | Mechanism |
|---|---|
| Authentication | Token-based (API keys, JWT, OAuth) |
| Authorization | Per-agent access control (which callers can invoke which agents) |
| Idempotency | `idempotency_key` prevents duplicate processing on retries |
| Rate limiting | Configure per-caller request limits |
| Input validation | All requests validated against the `AgentInvocationRequest` schema |
| Cost isolation | Each invocation has its own `FailSafeConfig` budget |
> **Warning:** Always authenticate A2A endpoints. An unauthenticated A2A server allows anyone to invoke your agents, consuming your LLM API credits.
## Next steps