AgentBreak is a chaos proxy for testing how your agents handle failures. It sits between your agent and the LLM/MCP server and injects faults:
```
Agent --> AgentBreak (localhost:5005) --> Real LLM / MCP server
                   ^
                   |
     .agentbreak/scenarios.yaml defines faults
```
```shell
pip install agentbreak
agentbreak init    # creates .agentbreak/ with default configs
agentbreak serve   # start the chaos proxy
```

Point your agent at `http://localhost:5005` instead of the real API:
- OpenAI SDK: set `OPENAI_BASE_URL=http://localhost:5005/v1`
- Anthropic SDK: set `ANTHROPIC_BASE_URL=http://localhost:5005`
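If you would rather configure the redirect in code than in the shell, the same effect can be achieved by setting the variables before the client is constructed. A minimal sketch (the commented-out client construction assumes the official `openai` package is installed and a proxy is running):

```python
import os

# Route SDK traffic through the AgentBreak proxy instead of the real API.
# Both SDKs read their base URL from the environment at client construction,
# so these must be set before the client is created.
os.environ["OPENAI_BASE_URL"] = "http://localhost:5005/v1"
os.environ["ANTHROPIC_BASE_URL"] = "http://localhost:5005"

# From here, construct your client as usual, e.g.:
#   from openai import OpenAI
#   client = OpenAI()  # picks up OPENAI_BASE_URL automatically
```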
Check results:

```shell
curl localhost:5005/_agentbreak/scorecard
```

`.agentbreak/application.yaml` -- what to proxy:
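Beyond `curl`, the scorecard endpoint can be consumed programmatically. The exact response schema is not documented here, so the `scenarios`/`injections` keys below are purely illustrative assumptions; adapt the summarizer to the real shape once you have inspected a response:

```python
import json
import urllib.request

def fetch_scorecard(base="http://localhost:5005"):
    """Fetch the scorecard JSON from a running AgentBreak proxy."""
    with urllib.request.urlopen(f"{base}/_agentbreak/scorecard") as resp:
        return json.load(resp)

def summarize(scorecard: dict) -> str:
    """Render one line per scenario. The 'scenarios' and 'injections'
    keys are assumed for illustration, not taken from AgentBreak docs."""
    lines = []
    for name, stats in scorecard.get("scenarios", {}).items():
        lines.append(f"{name}: {stats.get('injections', 0)} faults injected")
    return "\n".join(lines)

# Example with a made-up payload:
sample = {"scenarios": {"slow-llm": {"injections": 7}}}
print(summarize(sample))  # slow-llm: 7 faults injected
```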
```yaml
llm:
  enabled: true
  mode: mock        # mock (no API key needed) or proxy (forwards to upstream)
mcp:
  enabled: false    # set true + upstream_url for MCP testing
serve:
  port: 5005
```

`.agentbreak/scenarios.yaml` -- what faults to inject:
```yaml
version: 1
scenarios:
  - name: slow-llm
    summary: Latency spike on completions
    target: llm_chat
    fault:
      kind: latency
      min_ms: 2000
      max_ms: 5000
    schedule:
      mode: random
      probability: 0.3
```

Or use a preset: `brownout`, `mcp-slow-tools`, `mcp-tool-failures`, `mcp-mixed-transient`.
Available fault kinds: `http_error`, `latency`, `timeout` (MCP only), `empty_response`, `invalid_json`, `schema_violation`, `wrong_content`, `large_response`.
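As an illustration of combining one of these kinds with the scenario schema shown above, a hypothetical `http_error` scenario might look like the following. Note that the `status` field is an assumption for illustration and is not confirmed by this README; `target`, `fault`, and `schedule` follow the `slow-llm` example:

```yaml
version: 1
scenarios:
  - name: flaky-llm
    summary: Intermittent 500s on completions
    target: llm_chat
    fault:
      kind: http_error
      status: 500        # assumed field name for the injected HTTP status
    schedule:
      mode: random
      probability: 0.1
```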
For MCP testing:

```shell
agentbreak inspect   # discover tools from upstream MCP server
agentbreak serve     # proxy both LLM and MCP traffic
```

Full command list:

```shell
agentbreak init       # create .agentbreak/ config
agentbreak serve      # start proxy
agentbreak validate   # check config
agentbreak inspect    # discover MCP tools
agentbreak verify     # run tests
```

To install as a skill:

```shell
npx skills add mnvsk97/agentbreak
```

Then use `/agentbreak` to chaos-test your agent with a guided workflow.
See examples/ for sample agents and MCP servers with various auth configs.