# valkey-trace

The missing command tracer for Valkey and Redis.

valkey-trace is a transparent TCP proxy that sits between your application and your Valkey/Redis server, capturing every command as structured NDJSON with zero application changes. It is built for debugging production hotspots, building regression test suites from real traffic, and understanding what your client library is actually doing.
```
┌──────────────┐      RESP2/3      ┌─────────────────────┐      RESP2/3      ┌──────────────┐
│   Your App   │ ────────────────▶ │    valkey-trace     │ ────────────────▶ │   Valkey /   │
│   (client)   │ ◀──────────────── │   127.0.0.1:6479    │ ◀──────────────── │    Redis     │
└──────────────┘                   └─────────────────────┘                   └──────────────┘
                                             │
                                             ▼
                                   NDJSON event stream
                                     (stdout or file)
                                             │
                                   ┌─────────┴─────────┐
                                   │  • AI annotation  │
                                   │  • Flamegraph     │
                                   │  • Replay engine  │
                                   └───────────────────┘
```
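On the wire, the proxy has to frame RESP messages to know where each command starts and ends before it can emit an event. As a rough illustration only (not valkey-trace's actual implementation), a minimal RESP2 request parser looks like this:

```python
# Illustrative sketch: clients send commands as RESP arrays of bulk strings,
# e.g. SET user:42 hello  is encoded as  *3\r\n$3\r\nSET\r\n$7\r\nuser:42\r\n$5\r\nhello\r\n
def parse_command(buf: bytes) -> tuple[list[str], bytes]:
    """Parse one RESP2 array of bulk strings; return (args, remaining bytes)."""
    head, _, rest = buf.partition(b"\r\n")
    assert head.startswith(b"*"), "expected RESP array"
    nargs = int(head[1:])
    args = []
    for _ in range(nargs):
        length_line, _, rest = rest.partition(b"\r\n")
        assert length_line.startswith(b"$"), "expected bulk string"
        n = int(length_line[1:])
        args.append(rest[:n].decode())
        rest = rest[n + 2:]  # skip payload plus trailing \r\n
    return args, rest

args, leftover = parse_command(b"*3\r\n$3\r\nSET\r\n$7\r\nuser:42\r\n$5\r\nhello\r\n")
# args is ["SET", "user:42", "hello"]
```

The real proxy additionally handles RESP3 frame types, inline commands, and partial reads; this sketch only shows the framing idea.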
## Installation

```sh
cargo install valkey-trace
```

Or build from source:
```sh
git clone https://github.com/agent-cairn/valkey-trace
cd valkey-trace
cargo build --release
./target/release/valkey-trace --help
```

## Quick start

Point valkey-trace at your Valkey/Redis server, then connect your app to the proxy port:
```sh
# Start the proxy (terminal 1)
valkey-trace 127.0.0.1:6379 --listen 127.0.0.1:6479

# Connect your app to :6479 instead of :6379.
# Commands stream to stdout as NDJSON.
```

Example output:
```json
{"ts":"2025-03-29T10:00:00.123Z","seq":1,"session_id":"a1b2c3...","client_addr":"127.0.0.1:54321","command":"SET","args":["user:42","{'name':'Alice'}"],"args_raw_bytes":18,"resp_type":"simple_string","response_preview":"OK","latency_us":312,"db":0,"glide_detected":false,"annotations":[]}
{"ts":"2025-03-29T10:00:00.456Z","seq":2,"session_id":"a1b2c3...","client_addr":"127.0.0.1:54321","command":"GET","args":["user:42"],"args_raw_bytes":7,"resp_type":"bulk_string","response_preview":"{'name':'Alice'}","latency_us":198,"db":0,"glide_detected":false,"annotations":[]}
{"ts":"2025-03-29T10:00:01.001Z","seq":3,"session_id":"a1b2c3...","client_addr":"127.0.0.1:54321","command":"HGETALL","args":["session:99"],"args_raw_bytes":10,"resp_type":"array","response_preview":"[6 items]","latency_us":2104,"db":0,"glide_detected":false,"annotations":[]}
```

## Output formats

### NDJSON (default)

Structured newline-delimited JSON. Pipe it into jq, Elasticsearch, Loki, or any log ingestion pipeline:
```sh
valkey-trace 127.0.0.1:6379 -f ndjson | jq 'select(.latency_us > 1000)'
```

### Pretty

Color-coded terminal output with inline latency indicators:
```sh
valkey-trace 127.0.0.1:6379 -f pretty
# [10:00:00.123]  0.312ms [fast]  SET      → OK         user:42 ...
# [10:00:01.001]  2.104ms [SLOW]  HGETALL  → [6 items]  session:99
```

### Flamegraph

Aggregates command distributions and emits folded stacks on exit, for use with inferno:
```sh
valkey-trace 127.0.0.1:6379 -f flamegraph | inferno-flamegraph > trace.svg
```

## Filtering and sampling

```sh
# Only capture slow commands
valkey-trace --latency-threshold 5000   # > 5ms only

# Only capture GET/MGET commands
valkey-trace --filter "GET*"

# 10% sample rate for high-traffic production
valkey-trace --sample-rate 0.1

# Write to file
valkey-trace -o trace.ndjson
```

## AI annotation

Set OPENAI_API_KEY or ANTHROPIC_API_KEY and add --ai:
```sh
OPENAI_API_KEY=sk-... valkey-trace --ai -f pretty
# [10:00:00.123]  0.312ms [fast]  SET → OK  user:42
# [AI] Frequent cache-aside write pattern — consider SETEX with TTL to prevent unbounded growth
```

Events are batched (50 events or 10 seconds, whichever comes first) and annotated in bulk to minimize API calls. If neither key is set, annotation degrades gracefully and tracing continues as normal.
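That flush rule can be sketched roughly as follows; `annotate_batch` is a hypothetical callback standing in for the real bulk API call, not part of valkey-trace's public API:

```python
import time

BATCH_SIZE = 50        # flush after this many events...
BATCH_INTERVAL = 10.0  # ...or this many seconds, whichever comes first

class AnnotationBatcher:
    """Collect trace events and hand them to the annotator in bulk (sketch)."""

    def __init__(self, annotate_batch, clock=time.monotonic):
        self.annotate_batch = annotate_batch  # hypothetical bulk-annotation callback
        self.clock = clock
        self.buffer = []
        self.last_flush = clock()

    def push(self, event):
        self.buffer.append(event)
        full = len(self.buffer) >= BATCH_SIZE
        stale = self.clock() - self.last_flush >= BATCH_INTERVAL
        if full or stale:
            self.flush()

    def flush(self):
        if self.buffer:
            self.annotate_batch(self.buffer)  # one API call for the whole batch
        self.buffer = []
        self.last_flush = self.clock()
```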
## Replay

Capture traffic, then replay it against a different server to compare latency (staging vs. production, before/after tuning):
```sh
# Capture baseline
valkey-trace -o baseline.ndjson

# Replay against staging
valkey-trace 127.0.0.1:6380 --replay baseline.ndjson

# Output:
# Replay Summary
#   total:               1024
#   succeeded:           1021
#   failed:              3
#   avg orig latency:    312µs
#   avg replay latency:  287µs
#   latency delta:       -8.0%
```

## Glide detection

Detect when a connection is from a valkey-glide client:
```sh
valkey-trace --glide-detect -f ndjson | jq 'select(.glide_detected == true)'
```

Glide detection works by recognizing the characteristic `HELLO 3` → `CLIENT SETNAME glide-...` → `CLIENT NO-EVICT` initialization sequence.
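That fingerprint amounts to matching the first few commands of a connection against the handshake sequence. A simplified sketch (the real matcher may be more tolerant of ordering and interleaved commands):

```python
# Sketch: flag a connection as valkey-glide if its first three parsed
# commands match the glide handshake described above.
def is_glide_handshake(commands: list[list[str]]) -> bool:
    """commands: parsed commands from the start of a connection, in order."""
    if len(commands) < 3:
        return False
    hello, setname, noevict = commands[:3]
    return (
        [c.upper() for c in hello[:2]] == ["HELLO", "3"]
        and [c.upper() for c in setname[:2]] == ["CLIENT", "SETNAME"]
        and len(setname) > 2
        and setname[2].startswith("glide-")
        and [c.upper() for c in noevict[:2]] == ["CLIENT", "NO-EVICT"]
    )
```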
## CLI reference

```text
Usage: valkey-trace [OPTIONS] [TARGET]

Arguments:
  [TARGET]  Target Valkey/Redis server address [default: 127.0.0.1:6379]

Options:
  -l, --listen <LISTEN>            Local proxy listen address [default: 127.0.0.1:6479]
  -o, --output <OUTPUT>            Output file (default: stdout)
  -f, --format <FORMAT>            Output format: ndjson, pretty, flamegraph [default: ndjson]
      --filter <FILTER>            Filter commands by glob pattern (e.g. "GET*")
      --ai                         Enable AI annotation (requires OPENAI_API_KEY or ANTHROPIC_API_KEY)
      --sample-rate <SAMPLE_RATE>  Sample rate 0.0-1.0 [default: 1.0]
      --latency-threshold <US>     Only emit events with latency above threshold (microseconds)
      --replay <REPLAY>            Replay a trace file against the target server
      --glide-detect               Enable valkey-glide client fingerprinting
  -q, --quiet                      Quiet mode
  -h, --help                       Print help
  -V, --version                    Print version
```
## Why?

The Valkey/Redis ecosystem lacks a proper observability proxy:
| Tool | Captures commands | RESP3 | Structured output | Replay | AI annotation |
|---|---|---|---|---|---|
| redis-monitor | Yes | No | No | No | No |
| tcpdump | Yes (raw) | No | No | No | No |
| slowlog | Only slow | No | Partial | No | No |
| valkey-trace | Yes | Yes | Yes (NDJSON) | Yes | Yes |
valkey-trace is designed for:
- Debugging production hotspots without modifying application code
- Building regression test suites from real traffic
- Comparing latency between Valkey versions or server configs
- Understanding client library behavior (especially valkey-glide internals)
- Feeding AI tooling with structured command traces
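As a concrete example of that last point, the NDJSON stream is trivial to post-process in any language. A small Python sketch that surfaces the worst-case latency per command (field names follow the event schema below; the sample events here are trimmed to the relevant fields):

```python
import json
from collections import defaultdict

# Sample trace output, reduced to the two fields this sketch uses.
SAMPLE = """\
{"command":"SET","latency_us":312}
{"command":"GET","latency_us":198}
{"command":"HGETALL","latency_us":2104}
{"command":"GET","latency_us":95}
"""

def max_latency_by_command(ndjson_text):
    """Return the worst-case latency (in microseconds) seen per command."""
    worst = defaultdict(int)
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        worst[event["command"]] = max(worst[event["command"]], event["latency_us"])
    return dict(worst)

max_latency_by_command(SAMPLE)
# → {'SET': 312, 'GET': 198, 'HGETALL': 2104}
```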
## Event schema

```ts
{
  ts: string;               // ISO 8601 timestamp
  seq: number;              // Monotonic sequence number (global)
  session_id: string;       // UUID per TCP connection
  client_addr: string;      // Client IP:port
  command: string;          // Command name, uppercased (e.g. "GET")
  args: string[];           // Command arguments
  args_raw_bytes: number;   // Total byte size of all arguments
  resp_type: string;        // Response RESP type (bulk_string, array, integer, etc.)
  response_preview: string; // Truncated response preview (64 chars)
  latency_us: number;       // Round-trip latency in microseconds
  db: number;               // Active database (0-15)
  glide_detected: boolean;  // Whether the connection is from valkey-glide
  annotations: string[];    // AI-generated annotations
}
```

## License

MIT — see LICENSE