Request Lifecycle
Step-by-step flow from client to server and back
This page traces a single request through every stage of ProxyServer, from the moment a client sends it to when the response arrives back. We'll cover both HTTP and HTTPS paths.
HTTP Request Flow
When a client sends a plain HTTP request through the proxy:
Client ProxyServer Upstream
GET http://example.com/api ──►
│
① Create TrafficEntry
state: pending
trafficStore.add(entry)
│
② Collect request body
Buffer chunks, cap at 2MB
entry.setRequestBody(buf)
│
③ Request intercept check
│
┌─────────┴──────────────────┐
│ │
No rule match Rule matches
(or intercept OFF) + intercept ON
│ │
│ state: intercepted
│ phase: 'request'
│ │
│ Dashboard modal
│ User: Forward/Drop
│ │
├────────────────────────────┘
│
④ Forward to upstream
state: forwarded
│
│── HTTP request ──────────────►
│ │
│◄── HTTP response ────────────│
│
⑤ Buffer full response
Decompress gzip/br/deflate
entry.setResponse(...)
entry.setResponseBody(buf)
│
⑥ Response intercept check
│
(same pattern as request)
│
⑦ Forward response to client
state: completed
│
◄── HTTP response ─┘
│
⑧ WSBridge broadcasts update
Dashboard UI updates in real time
Key Points
- The proxy requires the absolute-URI request form (e.g., `GET http://example.com/path`); this is the standard for forward proxies per the HTTP/1.1 spec.
- Proxy-specific headers (`proxy-connection`, `proxy-authorization`) are stripped before forwarding.
- Body decompression happens for display only; the original compressed bytes are forwarded to the client.
- Every state change calls `trafficStore.update(entry.id)`, which triggers a WebSocket broadcast.
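The first two points can be sketched together: a forward proxy derives the upstream target from the absolute URI and drops proxy-specific headers before forwarding. The function and field names below are illustrative, not ProxyServer's actual API:

```javascript
// Headers that are meaningful only on the client↔proxy hop and must
// not be forwarded upstream.
const STRIP = new Set(["proxy-connection", "proxy-authorization"]);

function normalizeProxyRequest(method, target, headers) {
  const url = new URL(target); // throws if the target is not an absolute URI
  const forwarded = {};
  for (const [name, value] of Object.entries(headers)) {
    if (!STRIP.has(name.toLowerCase())) forwarded[name] = value;
  }
  return {
    host: url.hostname,
    port: Number(url.port) || 80,
    path: url.pathname + url.search,
    method,
    headers: forwarded,
  };
}

const out = normalizeProxyRequest("GET", "http://example.com/api?x=1", {
  "Proxy-Connection": "keep-alive",
  Accept: "application/json",
});
// → host "example.com", path "/api?x=1"; Proxy-Connection removed,
//   Accept kept
```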
HTTPS Request Flow
HTTPS adds a TLS layer with certificate generation:
Client ProxyServer Upstream
CONNECT example.com:443 ──►
│
① CONNECT handler
Delegates to TLSHandler
│
◄── 200 Established ────│
│
══ TLS Handshake ═══════│
Client ↔ Proxy │
Proxy presents cert ② CertManager.getHostCert("example.com")
signed by our CA Check memory cache → disk cache → generate
│
── GET /api (encrypted) ►│
│
③ HTTPStreamParser.readRequest()
Reads headers until \r\n\r\n
Reads body per Content-Length or chunked
│
④ Create TrafficEntry
URL: https://example.com/api
protocol: 'https'
trafficStore.add(entry)
│
⑤ Request intercept check
(same as HTTP flow)
│
⑥ Try HTTP/2 upstream
│
┌─────────┴──────────┐
│ │
H2 ALPN success H2 fail → H1 fallback
http2.connect() tls.connect()
:method, :path GET /api HTTP/1.1
│ │
├────────────────────┘
│
│──── request ──────────────────────►
│ │
│◄─── response ─────────────────────│
│
⑦ Process response
Decompress, capture body
entry.setResponse(...)
│
⑧ Response intercept check
│
⑨ _writeHTTP1Response()
Translate H2 → H1 if needed
Strip pseudo-headers (:status, etc.)
Set content-length
│
◄── response (encrypted) ─│
│
⑩ Keep-alive loop
If Connection: close → end tunnel
Else → processNextRequest()
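Step ③'s parsing can be sketched as follows. This is a much-simplified stand-in for what an `HTTPStreamParser.readRequest()`-style parser does with the decrypted bytes inside the tunnel: split the head at the first `\r\n\r\n`, then read the body per `Content-Length` (chunked decoding is omitted here):

```javascript
// Returns null when the buffer does not yet hold a complete request,
// which is the signal to keep reading from the tunnel.
function parseHttp1Request(buf) {
  const headEnd = buf.indexOf("\r\n\r\n");
  if (headEnd === -1) return null; // headers incomplete
  const head = buf.slice(0, headEnd).toString("latin1");
  const [requestLine, ...headerLines] = head.split("\r\n");
  const [method, path, version] = requestLine.split(" ");
  const headers = {};
  for (const line of headerLines) {
    const i = line.indexOf(":");
    headers[line.slice(0, i).trim().toLowerCase()] = line.slice(i + 1).trim();
  }
  const len = parseInt(headers["content-length"] || "0", 10);
  const bodyStart = headEnd + 4;
  if (buf.length < bodyStart + len) return null; // body incomplete
  return { method, path, version, headers, body: buf.slice(bodyStart, bodyStart + len) };
}

const req = parseHttp1Request(
  Buffer.from("POST /api HTTP/1.1\r\nHost: example.com\r\nContent-Length: 2\r\n\r\nhi")
);
// → method "POST", path "/api", headers.host "example.com", body "hi"
```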
Key Differences from HTTP
- CONNECT tunnel: the client first opens a raw TCP tunnel via `CONNECT`, then performs TLS inside it.
- Certificate generation: a unique cert is generated (or cached) for each hostname.
- Two TLS sessions: Client↔Proxy and Proxy↔Upstream are separate encrypted channels.
- HTTP/2 upstream: the proxy tries H2 first via ALPN negotiation; an `h2` badge appears in the dashboard for H2 responses.
- Keep-alive: multiple requests can flow over a single CONNECT tunnel. The `HTTPStreamParser` reads one request at a time and loops.
- H2→H1 translation: HTTP/2 pseudo-headers (`:status`, `:method`) are stripped and the response is reconstructed as HTTP/1.1 for the client.
TrafficEntry State Machine
┌─────────────────────────────────────────────┐
│ │
pending ─────────────────────────────────────── │ ──► error
│ │ (upstream fail)
├─── rule match + intercept ON │
│ │
intercepted │
│ │
┌───┴───┐ │
│ │ │
Forward Drop ────────────────────────────────── │ ──► aborted
│ │
forwarded │
│ │
├─── response received + response rule match │
│ │
intercepted (phase: 'response') │
│ │
├─── Forward │
│ │
completed ◄─────────────────────────────────────────┘
(timing.duration calculated)
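The diagram above can be written as a transition table (a sketch; the real `TrafficEntry` may enforce this differently). Any state can jump to `error` on upstream failure, and `aborted` is reachable only by dropping an intercepted entry:

```javascript
// Legal next-states for each TrafficEntry state, per the diagram.
const TRANSITIONS = {
  pending: ["intercepted", "forwarded", "error"],
  intercepted: ["forwarded", "completed", "aborted", "error"],
  forwarded: ["intercepted", "completed", "error"],
  completed: [],
  aborted: [],
  error: [],
};

function transition(entry, next) {
  if (!TRANSITIONS[entry.state].includes(next)) {
    throw new Error(`illegal transition ${entry.state} -> ${next}`);
  }
  entry.state = next;
  return entry;
}

const entry = { state: "pending" };
transition(entry, "intercepted"); // rule match + intercept ON
transition(entry, "forwarded");   // user clicked Forward
transition(entry, "completed");   // response relayed to client
```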
Timing Breakdown
Client sends Proxy forwards Server responds Client receives
request to upstream (first byte) complete response
│ │ │ │
├────────────────────┤ │ │
│ proxy overhead │ │ │
│ ├────────────────────┤ │
│ │ TTFB │ │
│ │ (time to first byte)│ │
│ │ ├─────────────────────┤
│ │ │ transfer time │
│ │ │ │
├────────────────────┴────────────────────┴─────────────────────┤
│ total duration │
│ timing.start → timing.end │
| Field | Measured At | Description |
|---|---|---|
| `timing.start` | `TrafficEntry` constructor | When the proxy first receives the request |
| `timing.ttfb` | `setResponse()` | Time from start to first byte of upstream response |
| `timing.end` | `complete()` / `fail()` | When the response has been fully forwarded to the client |
| `timing.duration` | Calculated | `end - start` in milliseconds |
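How the fields fit together, as a small worked example (the field names come from the table; the helper itself is illustrative):

```javascript
// duration is derived at complete()/fail() time; ttfb is already a
// delta measured from start.
function finalizeTiming(timing) {
  return { ...timing, duration: timing.end - timing.start };
}

// Proxy receives the request at t=1000ms, sees the first upstream
// byte 40ms later, and finishes forwarding at t=1130ms.
const timing = finalizeTiming({ start: 1000, ttfb: 40, end: 1130 });
// → duration: 130 (ms); of that, 40ms elapsed before the first
//   upstream byte and the remaining 90ms was transfer time
```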
Body Capture Limits
Both request and response bodies are captured up to 2MB (TrafficEntry.MAX_BODY_CAPTURE = 2 * 1024 * 1024). This limit applies only to what's stored for display in the dashboard — the full body is always forwarded to its destination regardless of size.
| Scenario | Stored in Entry | Forwarded to Client/Server |
|---|---|---|
| 500KB JSON response | Full 500KB (decompressed) | Original bytes (possibly compressed) |
| 10MB file download | First 2MB + bodyTruncated: true | Full 10MB |
| Empty body | null | Nothing |
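The capture behavior in the table can be sketched as a capped collector (the constant name is from the text; the `BodyCapture` class is illustrative). Chunks are stored until the cap is hit, then only counted; forwarding is unaffected because it happens on a separate path:

```javascript
const MAX_BODY_CAPTURE = 2 * 1024 * 1024; // 2MB, per TrafficEntry.MAX_BODY_CAPTURE

class BodyCapture {
  constructor() {
    this.chunks = [];
    this.captured = 0;
    this.bodyTruncated = false;
  }
  push(chunk) {
    const room = MAX_BODY_CAPTURE - this.captured;
    if (room <= 0) {
      this.bodyTruncated = true; // past the cap: drop, don't store
      return;
    }
    const keep = chunk.length > room ? chunk.slice(0, room) : chunk;
    if (keep.length < chunk.length) this.bodyTruncated = true;
    this.chunks.push(keep);
    this.captured += keep.length;
  }
  body() {
    return this.captured === 0 ? null : Buffer.concat(this.chunks); // null for empty bodies
  }
}

// A 10MB download: only the first 2MB is stored for the dashboard.
const cap = new BodyCapture();
for (let i = 0; i < 10; i++) cap.push(Buffer.alloc(1024 * 1024));
// cap.body().length === MAX_BODY_CAPTURE, cap.bodyTruncated === true
```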