AI Chat Agent

Ask an AI about your traffic, generate rules, and get curl commands.
The dashboard includes an embedded AI chat agent powered by Claude Code. It can see your captured traffic, understand the proxy's architecture, answer questions, and take actions like creating intercept rules or generating curl commands.
Prerequisites
- Claude Code CLI must be installed and accessible as `claude` in your PATH
- If the CLI is not found, the chat panel shows a graceful "not available" message
The Chat Panel
The panel is fixed to the bottom-right corner. When collapsed, it shows as a small pill button reading "AI Chat" with a green dot if Claude is available.
Context Toggles
You control exactly what information the agent sees using the checkbox toggles in the panel header. Each enabled toggle adds a section of data to the system prompt sent to Claude:
| Toggle | What It Includes | Token Cost |
|---|---|---|
| Traffic | Summary table of the most recent 200 captured requests (method, status, host, path, type, size, time) | Medium (~2-5k tokens) |
| Selected | Full detail of the currently selected traffic entry (headers, bodies, timing). Select an entry in the traffic list first. | Variable (depends on body size, max ~10k tokens) |
| Rules | All current intercept rules as JSON | Low (~100-500 tokens) |
| Source | Key ProxyServer source files — server.js, proxy-server.js, tls-handler.js, traffic-store.js, etc. | High (~15-25k tokens) |
| Browser | Cookies and localStorage from the browser extension (if installed) | Variable |
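The toggle-to-prompt mapping can be pictured as follows. This is an illustrative sketch, not ContextBuilder's actual code; the block headings and data field names are invented:

```javascript
// Illustrative: assemble a system prompt from enabled context blocks.
// Toggle names mirror the table above; data fields are assumptions.
function buildSystemPrompt(toggles, data) {
  const blocks = [];
  if (toggles.traffic) blocks.push("## Traffic Summary\n" + data.trafficSummary);
  if (toggles.selected && data.selectedEntry) {
    blocks.push("## Selected Entry\n" + JSON.stringify(data.selectedEntry, null, 2));
  }
  if (toggles.rules) blocks.push("## Intercept Rules\n" + JSON.stringify(data.rules));
  if (toggles.source) blocks.push("## Source Files\n" + data.sourceFiles);
  if (toggles.browser) blocks.push("## Browser Context\n" + JSON.stringify(data.browserContext));
  return blocks.join("\n\n");
}
```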
Token Budget Bar
The colored bar below the toggles shows the approximate token breakdown of the current context. Each segment represents one context block, and the total is displayed as "~Nk tokens". This helps you understand how much context you're sending per message.
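The dashboard's exact estimator isn't documented; a common heuristic is roughly four characters per token, which a sketch of the bar's total might use:

```javascript
// Rough heuristic (~4 chars per token); not the dashboard's actual estimator.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Sum per-block estimates and format the "~Nk tokens" label.
function formatBudget(blocks) {
  const total = Object.values(blocks)
    .reduce((sum, text) => sum + estimateTokens(text), 0);
  return `~${Math.round(total / 1000)}k tokens`;
}
```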
What You Can Ask
Traffic Analysis
- "How many requests have been captured?"
- "Show me all POST requests to /api endpoints"
- "Which requests returned 4xx or 5xx errors?"
- "What's the average response time?"
- "Summarize the traffic patterns I'm seeing"
Request Details
(Toggle "Selected" ON and click a request first)
- "What content type is this response?"
- "Explain the headers in this request"
- "Generate a curl command for this request"
- "What does this response body contain?"
Rule Management
- "Create a rule to intercept all POST requests to *api*"
- "What intercept rules are currently active?"
- "Delete the rule matching /login"
Architecture Questions
(Toggle "Source" ON)
- "How does the ring buffer work in traffic-store.js?"
- "Explain how TLS MITM interception is implemented"
- "How does the request interceptor hold promises?"
- "What happens when the traffic store reaches 5000 entries?"
Slash Commands
| Command | Action |
|---|---|
| `/reset` | Clear the conversation history and start fresh |
| `/compact` | Summarize the conversation to reduce context size |
| `/help` | Show available commands and tips |
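Client-side handling of these commands can be as simple as a lookup before the message is sent; a hypothetical dispatcher (the action names are invented):

```javascript
// Illustrative dispatch: map slash commands to actions before the
// message is sent to the server.
function dispatchSlashCommand(input) {
  const actions = {
    "/reset": "clear-history",
    "/compact": "summarize-history",
    "/help": "show-help",
  };
  // null means "not a command": send the text as a normal chat message.
  return actions[input.trim()] ?? null;
}
```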
How It Works Internally
- User types a message in the chat panel
- Chat.js sends a `chat:send` WebSocket message with the text, selected entry ID, and context toggles
- WSBridge routes the `chat:*` message to ChatHandler
- ChatHandler calls ContextBuilder to assemble the system prompt from enabled context blocks
- ClaudeSession spawns `claude -p <prompt> --output-format stream-json` as a subprocess
- Streaming JSON events are parsed and forwarded as `chat:chunk` messages to the client
- When the process completes, a `chat:done` message is sent with the full response
- Conversation history is maintained server-side and replayed as context in subsequent turns
Browser Extension (Optional)
An optional Chrome extension in the extension/ directory sends the current page's cookies and localStorage to the proxy, making them available to the AI agent.
Installing
- Open `chrome://extensions/`
- Enable "Developer mode"
- Click "Load unpacked" and select the `extension/` directory
- Click the extension icon and "Send Context" while on any page
Once sent, toggle "Browser" ON in the chat panel to include this data in the AI's context.
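On the proxy side, the received cookie data presumably needs normalizing before it is placed into the AI's context; a hypothetical helper for turning a `document.cookie`-style string into an object (not the proxy's actual code):

```javascript
// Illustrative: parse "a=1; b=2" (the document.cookie format the
// extension would see on a page) into a plain object.
function parseCookieString(str) {
  const entries = str
    .split(";")
    .map((pair) => pair.trim())
    .filter(Boolean)
    .map((pair) => {
      const i = pair.indexOf("=");
      return [pair.slice(0, i), pair.slice(i + 1)];
    });
  return Object.fromEntries(entries);
}
```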