ACPAgent lets you use any Agent Client Protocol server as the backend for an OpenHands conversation. Instead of calling an LLM directly, the agent spawns an ACP server subprocess and communicates with it over JSON-RPC. The server manages its own LLM, tools, and execution — your code just sends messages and collects responses.
## Basic Usage
`acp_command` is the shell command used to spawn the server process. The SDK communicates with it over JSON-RPC on stdin/stdout.
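The transport can be pictured as line-delimited JSON-RPC over the child process's stdin/stdout. A minimal stdlib sketch of that mechanic, where the inline "server" is a stand-in for a real ACP server, not the actual protocol handshake:

```python
import json
import subprocess
import sys

# Stand-in "server": reads one JSON-RPC request line and answers it.
# A real ACP server implements the full Agent Client Protocol instead.
SERVER = (
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {'ok': True}}\n"
    "sys.stdout.write(json.dumps(resp) + '\\n')\n"
    "sys.stdout.flush()\n"
)

def rpc_roundtrip() -> dict:
    # Spawn the subprocess and exchange JSON-RPC over its pipes,
    # analogous to what the SDK does with the acp_command process.
    proc = subprocess.Popen(
        [sys.executable, "-c", SERVER],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    try:
        request = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
        proc.stdin.write(json.dumps(request) + "\n")
        proc.stdin.flush()
        return json.loads(proc.stdout.readline())
    finally:
        proc.terminate()
        proc.wait()

response = rpc_roundtrip()
```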
**Key difference from standard agents:** With `ACPAgent`, you don't need an `LLM_API_KEY` in your code. The ACP server handles its own LLM authentication and API calls. This is delegation — your code sends messages to the ACP server, which manages all LLM interactions internally.

## What ACPAgent Does Not Support
Because the ACP server manages its own tools and context, these `AgentBase` features are not available on `ACPAgent`:

- `tools` / `include_default_tools` — the server has its own tools
- `mcp_config` — configure MCP on the server side
- `condenser` — the server manages its own context window
- `critic` — the server manages its own evaluation
- `agent_context` — configure the server directly

Setting any of these raises a `NotImplementedError` at initialization.
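The init-time guard can be pictured as a constructor check. `ACPAgentSketch` and its validation body below are illustrative, not the SDK's actual implementation:

```python
class ACPAgentSketch:
    """Illustrative stand-in for ACPAgent's init-time validation."""

    # AgentBase features the ACP server manages itself.
    _UNSUPPORTED = ("tools", "mcp_config", "condenser", "critic", "agent_context")

    def __init__(self, acp_command: str, **kwargs):
        # Reject any unsupported feature up front, at construction time.
        for name in self._UNSUPPORTED:
            if kwargs.get(name) is not None:
                raise NotImplementedError(
                    f"{name!r} is not supported by ACPAgent; "
                    "configure it on the ACP server instead"
                )
        self.acp_command = acp_command

# Plain construction succeeds; passing e.g. a condenser fails fast.
agent = ACPAgentSketch("my-acp-server --stdio")
```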
## How It Works
1. `ACPAgent` spawns the ACP server as a subprocess
2. The SDK initializes the ACP protocol and creates a session
3. When you call `conversation.send_message(...)`, the message is forwarded to the ACP server via `conn.prompt()`
4. The server processes the request using its own LLM and tools, streaming session updates (text chunks, thought chunks, tool calls) back to the SDK
5. The SDK accumulates the response and emits it as a `MessageEvent`
6. Permission requests from the server are auto-approved — this means the SDK automatically grants any tool execution or file access the server requests, so ensure you trust the ACP server you're running
7. Token usage and cost metrics from the ACP server are captured into the agent's `LLM.metrics`
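Steps 4 and 5 above amount to folding a stream of session updates into one final message. A sketch of that accumulation, where the update shapes and field names are illustrative rather than the SDK's actual event types:

```python
from dataclasses import dataclass, field

@dataclass
class MessageAccumulator:
    """Collects streamed session updates into one response (illustrative)."""
    text_parts: list = field(default_factory=list)
    thought_parts: list = field(default_factory=list)

    def on_update(self, update: dict) -> None:
        # Route each streamed chunk by kind, as updates arrive from the server.
        kind = update.get("kind")
        if kind == "text_chunk":
            self.text_parts.append(update["text"])
        elif kind == "thought_chunk":
            self.thought_parts.append(update["text"])
        # Tool-call updates would be tracked similarly.

    def finish(self) -> str:
        # The joined text is what would be emitted as a single MessageEvent.
        return "".join(self.text_parts)

acc = MessageAccumulator()
for update in [
    {"kind": "thought_chunk", "text": "planning"},
    {"kind": "text_chunk", "text": "Hello, "},
    {"kind": "text_chunk", "text": "world"},
]:
    acc.on_update(update)

acc.finish()  # -> "Hello, world"
```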
## Configuration

### Server Command and Arguments
| Parameter | Description |
|---|---|
| `acp_command` | Command to start the ACP server (required) |
| `acp_args` | Additional arguments appended to the command |
| `acp_env` | Additional environment variables for the server process |
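One plausible way these three parameters combine into the spawned process — a sketch under the assumption that the command string is split shell-style and the extra environment variables overlay the parent environment (the SDK's exact behavior may differ):

```python
import os
import shlex

def build_spawn_args(acp_command, acp_args=None, acp_env=None):
    """Illustrative: combine the three parameters into argv + env for Popen."""
    # Split the command string shell-style, then append extra arguments.
    argv = shlex.split(acp_command) + list(acp_args or [])
    # Extra variables overlay (and can override) the parent environment.
    env = {**os.environ, **(acp_env or {})}
    return argv, env

argv, env = build_spawn_args(
    "my-acp-server --stdio",          # hypothetical server command
    acp_args=["--verbose"],
    acp_env={"ACP_LOG": "debug"},
)
```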
### Metrics
Token usage and cost data are automatically captured from the ACP server's responses. You can inspect them through the standard `LLM.metrics` interface:
- `PromptResponse.usage` — per-turn token counts (input, output, cached, reasoning tokens)
- `UsageUpdate` notifications — cumulative session cost and context window size
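The two sources above can be pictured as feeding one running metrics object. The class and field names below are an illustrative sketch of the accumulation pattern, not the SDK's actual `LLM.metrics` implementation:

```python
from dataclasses import dataclass

@dataclass
class TurnUsage:
    """Per-turn token counts, as carried by a PromptResponse.usage payload."""
    input_tokens: int
    output_tokens: int
    cached_tokens: int = 0
    reasoning_tokens: int = 0

class MetricsSketch:
    """Illustrative accumulator mirroring what a metrics interface exposes."""

    def __init__(self):
        self.input_tokens = 0
        self.output_tokens = 0
        self.accumulated_cost = 0.0

    def record_turn(self, usage: TurnUsage) -> None:
        # Per-turn counts are summed across the session.
        self.input_tokens += usage.input_tokens
        self.output_tokens += usage.output_tokens

    def record_usage_update(self, total_cost: float) -> None:
        # UsageUpdate notifications carry a cumulative cost, so replace
        # rather than add.
        self.accumulated_cost = total_cost

metrics = MetricsSketch()
metrics.record_turn(TurnUsage(input_tokens=120, output_tokens=30))
metrics.record_turn(TurnUsage(input_tokens=80, output_tokens=20))
metrics.record_usage_update(0.0042)
```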
## Cleanup
Always call `agent.close()` when you are done to terminate the ACP server subprocess. A `try`/`finally` block is recommended:
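A sketch of that pattern, using a stub agent in place of a real `ACPAgent` so the shape of the cleanup is visible on its own:

```python
class AgentStub:
    """Stand-in with the same close() contract as ACPAgent."""

    def __init__(self):
        self.closed = False

    def close(self):
        # In the real agent this terminates the ACP server subprocess.
        self.closed = True

agent = AgentStub()
try:
    pass  # run the conversation here
finally:
    # Always reached, even if the conversation raises,
    # so the server subprocess is never left running.
    agent.close()
```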
## Ready-to-run Example
This example is available on GitHub: examples/01_standalone_sdk/40_acp_agent_example.py
## Next Steps
- Creating Custom Agents — Build specialized agents with custom tool sets and system prompts
- Agent Delegation — Compose multiple agents for complex workflows
- LLM Metrics — Track token usage and costs across models

