Polish dashboard: e2e traffic, hash chain viz, race fix, screenshots #9
ojongerius merged 6 commits into main
Conversation
Intent chains tab now shows:
- Auto-generated narrative labels (read-only exploration, N reads → tool)
- Risk escalation with colored timeline dots and border highlights
- Argument summaries inline (sql=SELECT..., path=..., repo=...)
- Per-step duration and operation type summary footer
- Cross-session view when "All servers" is selected
- Red/yellow left border for intents containing block/pause/flag actions

Traffic generator updated with multiple intent bursts per session, separated by 3-5s gaps for realistic temporal grouping.
- `make demo`: one command to build, reset DB, generate traffic, open browser
- `beacon-proxy verify`: CLI subcommand to verify hash chain integrity
- `configs/example_claude.json`: ready-to-paste Claude Desktop config wrapping GitHub + filesystem + Slack through beacon-proxy
- Dashboard auto-selects the first session with tool calls on load
- `make verify`: shorthand for `beacon-proxy verify`
- Updated README with verify docs and a quick demo section
- E2e generator drives real MCP traffic through beacon-proxy wrapping the filesystem server, producing a genuine audit trail with valid hashes
- Hash chain tab now shows per-message verification: sequence numbers, direction badges, method names, truncated SHA-256 hashes, and linkage arrows between entries
- Fix JSON tags on `ChainStatus` (`broken_at`/`error` were not serializing)
- Fix data race in `TestSSEEndpoint` using `httptest.NewServer`
- Add `make demo` (e2e) and `make demo-fake` targets
Two dashboard screenshots: tool calls overview showing real MCP traffic with risk scores and policy actions, and hash chain verification showing per-message SHA-256 hashes with linkage arrows.
Pull request overview
This PR polishes the Beacon dashboard and demo workflow by adding real end-to-end MCP traffic generation, richer intent/hash-chain visualization, new API endpoints for intents and chain detail, and a CLI hash-chain verifier.
Changes:
- Add a real e2e traffic generator (`cmd/beacon-e2e`) and update demo Makefile targets (`make demo`, `make demo-fake`).
- Expand dashboard UI for intent chains and detailed per-message hash chain verification, backed by new server/store endpoints.
- Add a `beacon-proxy verify` CLI subcommand and update intent/query data returned to the dashboard.
Reviewed changes
Copilot reviewed 11 out of 13 changed files in this pull request and generated 13 comments.
| File | Description |
|---|---|
| internal/web/static/index.html | UI/JS updates for intent chains and detailed hash-chain visualization + auto-session selection. |
| internal/web/server.go | Adds /api/intents and /api/chain/detail endpoints. |
| internal/web/server_test.go | Fixes SSE test race by using httptest.NewServer. |
| internal/audit/store.go | Fixes ChainStatus JSON tags and adds VerifyChainDetail with per-message entries. |
| internal/audit/queries.go | Extends intent tool entry fields and allows listing intents across all sessions. |
| cmd/beacon-proxy/main.go | Adds verify subcommand to validate DB hash-chain integrity. |
| cmd/beacon-e2e/main.go | New tool that drives real MCP traffic through beacon-proxy. |
| cmd/beacon-traffic/main.go | Enhances fake traffic generator to emit multiple intent “bursts”. |
| configs/example_claude.json | Adds multi-server example configuration. |
| README.md | Documents verify, updates screenshots, and adds a quick demo section. |
| Makefile | Adds demo, demo-fake, and verify targets. |
Comments suppressed due to low confidence (1)
internal/audit/queries.go:227
`ListIntents` now supports `sessionID == ""` (all sessions) and returns additional fields (`arguments`, `duration_ms`, `server_name`), but `TestListIntents` still only asserts the tool name. Extend tests to validate the new fields and add a case for the empty-sessionID behavior to prevent regressions in the dashboard endpoints.
```go
// ListIntents returns intent contexts with their tool calls for a session.
// If sessionID is empty, returns intents across all sessions.
func (s *Store) ListIntents(sessionID string) ([]IntentSummary, error) {
	query := `
		SELECT ic.id, ic.session_id, ic.created_at,
		       itc.sequence_order, itc.tool_call_id,
		       tc.tool_name, tc.arguments, tc.operation_type, tc.risk_score,
		       tc.policy_action, tc.duration_ms, se.server_name
		FROM intent_contexts ic
		JOIN intent_tool_calls itc ON itc.intent_id = ic.id
		JOIN tool_calls tc ON tc.id = itc.tool_call_id
		JOIN sessions se ON se.id = ic.session_id
	`
	var args []any
	if sessionID != "" {
		query += " WHERE ic.session_id = ?"
		args = append(args, sessionID)
	}
	query += " ORDER BY ic.created_at ASC, itc.sequence_order ASC"
	rows, err := s.db.Query(query, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	intentMap := make(map[string]*IntentSummary)
	var order []string
	for rows.Next() {
		var intentID, sid string
		var createdAt time.Time
		var entry IntentToolEntry
		var durationMs sql.NullInt64
		if err := rows.Scan(&intentID, &sid, &createdAt,
			&entry.SequenceOrder, &entry.ToolCallID,
			&entry.ToolName, &entry.Arguments, &entry.OperationType,
			&entry.RiskScore, &entry.PolicyAction, &durationMs, &entry.ServerName); err != nil {
			return nil, err
		}
		if durationMs.Valid {
			entry.DurationMs = &durationMs.Int64
		}
		entry.Arguments = s.decrypt(entry.Arguments)
		if _, ok := intentMap[intentID]; !ok {
			intentMap[intentID] = &IntentSummary{
				ID:        intentID,
				SessionID: sid,
				CreatedAt: createdAt,
			}
			order = append(order, intentID)
		}
		intentMap[intentID].ToolCalls = append(intentMap[intentID].ToolCalls, entry)
	}
	var out []IntentSummary
	for _, id := range order {
		out = append(out, *intentMap[id])
	}
	return out, rows.Err()
}
```
```js
function riskStepClass(score) {
  if (score <= 10) return 'step-risk-low';
  if (score <= 30) return 'step-risk-low';
  if (score <= 60) return 'step-risk-med';
  if (score <= 80) return 'step-risk-high';
  return 'step-risk-crit';
}
```
riskStepClass has two consecutive branches returning 'step-risk-low' (<=10 and <=30), so the first condition is redundant and likely not the intended risk bucketing. Align these thresholds with riskBadgeClass (<=30 low, <=60 med, etc.) to avoid incorrect styling.
Fixed in e4633aa — removed the redundant <= 10 branch so the thresholds are now <= 30, <= 60, <= 80, else.
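The corrected bucketing can be sketched standalone (a Go rendering of the same thresholds for illustration; the real code lives in the dashboard's JS, and the `step-risk-*` class names follow its existing convention):

```go
package main

import "fmt"

// riskStepClass mirrors the corrected dashboard logic: one bucket per
// threshold, aligned with riskBadgeClass (<=30 low, <=60 med, <=80 high).
func riskStepClass(score int) string {
	if score <= 30 {
		return "step-risk-low"
	}
	if score <= 60 {
		return "step-risk-med"
	}
	if score <= 80 {
		return "step-risk-high"
	}
	return "step-risk-crit"
}

func main() {
	for _, s := range []int{10, 30, 45, 80, 95} {
		fmt.Printf("%d -> %s\n", s, riskStepClass(s))
	}
}
```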
```go
// VerifyChainDetail walks the hash chain and returns per-message verification results.
func (s *Store) VerifyChainDetail() (*ChainStatus, []ChainEntry, error) {
	rows, err := s.db.Query(
		"SELECT id, session_id, direction, jsonrpc_id, method, raw, sequence, prev_hash, hash FROM messages ORDER BY sequence ASC",
	)
	if err != nil {
		return nil, nil, fmt.Errorf("query messages: %w", err)
	}
	defer rows.Close()

	status := &ChainStatus{Valid: true}
	var entries []ChainEntry
	expectedPrevHash := ""

	for rows.Next() {
		var id, sessionID, direction, raw, prevHash, hash string
		var jsonrpcID, method sql.NullString
		var seq int64
		if err := rows.Scan(&id, &sessionID, &direction, &jsonrpcID, &method, &raw, &seq, &prevHash, &hash); err != nil {
			return nil, nil, fmt.Errorf("scan row: %w", err)
		}
		status.Total++

		entry := ChainEntry{
			Sequence:  int(seq),
			Direction: direction,
			Method:    method.String,
			Hash:      hash,
			PrevHash:  prevHash,
			Valid:     true,
		}

		decrypted, err := s.decryptVerify(raw)
		if err != nil {
			entry.Valid = false
			status.Valid = false
			status.BrokenAt = int(seq)
			status.Error = fmt.Sprintf("sequence %d: decryption failed", seq)
			entries = append(entries, entry)
			return status, entries, nil
		}

		if prevHash != expectedPrevHash {
			entry.Valid = false
			status.Valid = false
			status.BrokenAt = int(seq)
			status.Error = fmt.Sprintf("sequence %d: prev_hash mismatch", seq)
			entries = append(entries, entry)
			return status, entries, nil
		}

		computed := computeHash(id, sessionID, direction, jsonrpcID.String, method.String, decrypted, seq, prevHash)
		if hash != computed {
			entry.Valid = false
			status.Valid = false
			status.BrokenAt = int(seq)
			status.Error = fmt.Sprintf("sequence %d: hash mismatch (record modified)", seq)
			entries = append(entries, entry)
			return status, entries, nil
		}

		expectedPrevHash = hash
		entries = append(entries, entry)
	}

	return status, entries, rows.Err()
}
```
VerifyChainDetail introduces new verification behavior (per-message entries and early-return semantics) but there are no unit tests covering it (including encryption and tamper cases). Add tests similar to the existing VerifyChain_* tests to ensure the returned entries and status stay consistent as the schema/crypto changes.
Good call — VerifyChainDetail is tested indirectly via the /api/chain/detail handler in integration tests, but dedicated unit tests would be better. Tracked for follow-up.
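The linkage property such a test would exercise can be sketched as a standalone model (simplified on purpose: the toy `hashOf` below covers only payload plus previous hash, whereas the store's real `computeHash` also covers id, session, direction, and method):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

type msg struct {
	Raw      string
	PrevHash string
	Hash     string
}

// hashOf is a simplified stand-in for the store's computeHash: each
// message's hash covers its payload plus the previous hash.
func hashOf(raw, prevHash string) string {
	sum := sha256.Sum256([]byte(prevHash + raw))
	return hex.EncodeToString(sum[:])
}

// buildChain links messages so each entry's PrevHash is the prior Hash,
// starting from the empty string like the real chain.
func buildChain(raws []string) []msg {
	var out []msg
	prev := ""
	for _, r := range raws {
		h := hashOf(r, prev)
		out = append(out, msg{Raw: r, PrevHash: prev, Hash: h})
		prev = h
	}
	return out
}

// chainValid re-walks the chain the way VerifyChainDetail does, failing
// at the first broken link or recomputed-hash mismatch.
func chainValid(chain []msg) bool {
	expectedPrev := ""
	for _, m := range chain {
		if m.PrevHash != expectedPrev || m.Hash != hashOf(m.Raw, m.PrevHash) {
			return false
		}
		expectedPrev = m.Hash
	}
	return true
}

func main() {
	chain := buildChain([]string{"init", "tools/list", "tools/call"})
	fmt.Println("intact:", chainValid(chain)) // true
	chain[1].Raw = "tampered"                 // modify a record: hash no longer matches
	fmt.Println("tampered:", chainValid(chain)) // false
}
```

A tamper test in this shape (build, mutate one record, assert `BrokenAt`) would cover the early-return semantics the comment calls out.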
```go
mux.HandleFunc("GET /api/tool-calls", s.handleToolCalls)
mux.HandleFunc("GET /api/intents", s.handleAllIntents)
mux.HandleFunc("GET /api/sessions/{id}/intents", s.handleIntents)
mux.HandleFunc("GET /api/chain/verify", s.handleVerifyChain)
mux.HandleFunc("GET /api/chain/detail", s.handleVerifyChainDetail)
```
New endpoints /api/intents and /api/chain/detail are introduced, but there are no handler tests covering them (unlike /api/chain/verify). Add tests in internal/web/server_test.go to assert status codes and JSON shapes so dashboard changes don’t silently break.
Fair point. The existing test suite covers the core handlers but these newer endpoints need their own tests. Tracked for follow-up.
```json
        "--", "npx", "-y", "@modelcontextprotocol/server-github"
      ],
      "env": {
        "GITHUB_TOKEN": "ghp_your-token-here"
```
The placeholder GITHUB_TOKEN value starts with ghp_, which matches real GitHub token formats and can trigger secret scanners / be mistaken for a real credential. Use a clearly non-token placeholder (e.g. YOUR_GITHUB_TOKEN) to avoid false positives.
Suggested change:
```diff
-        "GITHUB_TOKEN": "ghp_your-token-here"
+        "GITHUB_TOKEN": "YOUR_GITHUB_TOKEN"
```
Fixed in e4633aa — changed to YOUR_GITHUB_TOKEN and YOUR_SLACK_BOT_TOKEN to avoid triggering secret scanners.
```js
  </div>
`).join('')}
${tc.policy_action !== 'pass' ? `<span class="policy-badge policy-${tc.policy_action}">${tc.policy_action}</span>` : ''}
<span class="step-args" title="${esc(tc.arguments)}">${argsSummary}</span>
```
argsSummary is inserted into innerHTML without escaping. Since it is derived from tc.arguments (which can contain attacker-controlled strings), this is another XSS vector in the intents view. Escape the summary before interpolation (or build the DOM with textContent rather than innerHTML).
Suggested change:
```diff
-<span class="step-args" title="${esc(tc.arguments)}">${argsSummary}</span>
+<span class="step-args" title="${esc(tc.arguments)}">${esc(argsSummary)}</span>
```
Fixed in e4633aa — argsSummary is now escaped via esc() before interpolation into innerHTML.
```make
	@echo ""
	@echo "Starting dashboard..."
	@open http://localhost:8080 2>/dev/null || xdg-open http://localhost:8080 2>/dev/null || echo "Open http://localhost:8080"
	@./$(BINARY) --server-name demo --port 8080 -- echo "beacon-demo-server"
```
demo-fake starts beacon-proxy with -- echo "beacon-demo-server", but echo exits immediately, so the proxy (and dashboard) will shut down right away. Use a long-running command (or a dedicated beacon-proxy serve mode) so the dashboard stays up for users to explore the generated data.
Suggested change:
```diff
-	@./$(BINARY) --server-name demo --port 8080 -- echo "beacon-demo-server"
+	@./$(BINARY) --server-name demo --port 8080 -- sh -c 'echo "beacon-demo-server"; tail -f /dev/null'
```
Fixed in e4633aa — changed to sh -c 'echo "beacon-demo-server"; exec cat' so the process stays alive for the proxy.
```go
data, _ := json.Marshal(req)
stdin.Write(data)
stdin.Write([]byte("\n"))
mu.Unlock()
```
send ignores errors from json.Marshal and both stdin.Write calls. If the child process exits or the pipe breaks, the generator can silently keep going with missing traffic. Handle and propagate these write/marshal errors so failures are visible.
Suggested change:
```diff
-data, _ := json.Marshal(req)
-stdin.Write(data)
-stdin.Write([]byte("\n"))
-mu.Unlock()
+defer mu.Unlock()
+data, err := json.Marshal(req)
+if err != nil {
+	log.Printf("failed to marshal request: %v", err)
+	return nil
+}
+if _, err := stdin.Write(data); err != nil {
+	log.Printf("failed to write request to stdin: %v", err)
+	return nil
+}
+if _, err := stdin.Write([]byte("\n")); err != nil {
+	log.Printf("failed to write newline to stdin: %v", err)
+	return nil
+}
```
Fixed in e4633aa — send() now checks errors from json.Marshal and both stdin.Write calls, logging and returning nil on failure.
```go
var resp response
if err := json.Unmarshal(line, &resp); err != nil {
	continue // skip non-JSON lines (e.g. server stderr leaks)
}
// Check if this is a response (has id) or a notification (no id)
if resp.ID != nil && string(resp.ID) != "null" {
	return &resp
}
```
send returns the first JSON message with a non-null id, but it doesn’t verify that resp.ID matches the request being sent. If the server ever emits out-of-order responses (or responses to earlier requests), this can misattribute results and produce confusing demo output. Compare the response id to req.ID before returning.
Valid observation. In practice this is a single-threaded serial request flow so IDs always match in order, but adding ID matching would be more robust. Low priority for an e2e test harness.
```js
// Link arrow between entries
if (i > 0) {
  const linkClass = e.valid ? '' : 'broken';
  const prevHash = entries[i-1].hash ? entries[i-1].hash.substring(0, 8) : '';
```
prevHash is computed but never used when rendering the chain link, which is dead code and makes the intent of this block unclear. Remove it or use it in the link text/title if it’s meant to show the previous entry’s hash.
Suggested change:
```diff
-  const prevHash = entries[i-1].hash ? entries[i-1].hash.substring(0, 8) : '';
```
Fixed in e4633aa — removed the unused prevHash variable.
```js
<span>Intent #${i + 1} — ${intent.tool_calls.length} action(s)</span>
<span class="dim">${formatTime(intent.created_at)}</span>
<div>
  <div class="intent-label">Intent #${i + 1} — ${narrative}</div>
```
narrative is interpolated into innerHTML without escaping. Since it can include tool_name values from the API, this allows HTML injection/XSS in the Intent label. Escape the narrative (or ensure buildNarrative returns an escaped string) before inserting it into the template.
Suggested change:
```diff
-<div class="intent-label">Intent #${i + 1} — ${narrative}</div>
+<div class="intent-label">Intent #${i + 1} — ${esc(narrative)}</div>
```
Fixed in e4633aa — narrative is now escaped via esc() before interpolation into innerHTML.
…tring IDs (#18)

Issues fixed:
- #10 — Multi-process hash chain corruption: sequence and prev_hash are now read from the DB within a transaction instead of cached in memory, so multiple proxy instances sharing one SQLite DB produce a correct chain.
- #11 — sendBlockError invalid JSON for string IDs: store the raw json.RawMessage ID bytes and use them when constructing error responses, producing valid JSON for both string and numeric IDs.
- #12 — Concurrent writes to client stdout: split the single pipe() into pipeClientToServer and pipeServerToClient. All writes to the client go through writeToClient (which holds clientMu), eliminating interleaving between server responses and block/deny errors.
- #13 — Retention pruning breaks hash chain: after deleting old sessions, reanchorChain() recomputes prev_hash and hash for all remaining messages so the chain starts from "" and VerifyChain continues to pass.
- #14 — Scanner buffer overflow kills connection: increased maxMessageSize from 1MB to 10MB. Scanner errors now log the actual error for diagnosis.
- #15 — Pause blocks entire STDIO pipe: pause/approval is now handled asynchronously in a goroutine. The pipe continues forwarding other messages. Approved messages are forwarded via writeToServer; denied messages get an error via writeToClient. A WaitGroup ensures pending pause goroutines complete before closing serverIn.
- #16 — No auth on approval endpoints: approve/deny now require a token via X-Beacon-Token header or ?token= query param. Token is generated on startup (or set via BEACON_AUTH_TOKEN env var) and logged to stderr.
- #17 — Store mutex serializes reads: changed sync.Mutex to sync.RWMutex. VerifyChain and VerifyChainDetail use RLock. Dashboard read queries don't hold the lock at all (WAL mode handles concurrent readers).

Closes #10, closes #11, closes #12, closes #13, closes #14, closes #15, closes #16, closes #17
- Escape narrative and argsSummary via esc() to prevent XSS in intent view
- Remove dead variables (tools in buildNarrative, prevHash in chain view)
- Fix riskStepClass duplicate threshold — align with riskBadgeClass
- Replace ghp_ placeholder with YOUR_GITHUB_TOKEN to avoid secret scanner hits
- Use distinct ports per server in example config to prevent bind failures
- Add demo-fake to .PHONY
- Fix demo-fake target: use 'exec cat' to keep proxy running after echo
- Add error handling to e2e traffic generator send()
Pull request overview
Copilot reviewed 14 out of 16 changed files in this pull request and generated 7 comments.
```diff
@@ -139,6 +172,9 @@ func (s *Server) handleApprove(w http.ResponseWriter, r *http.Request) {
 }
 
 func (s *Server) handleDeny(w http.ResponseWriter, r *http.Request) {
+	if !s.checkAuth(w, r) {
+		return
+	}
```
handleApprove/handleDeny now require checkAuth when authToken is set, but the embedded dashboard JS currently posts to these endpoints without sending X-Beacon-Token (and beacon-proxy generates a non-empty token by default). This will make approve/deny actions from the dashboard fail with 401. Either expose the token to the dashboard in a safe way (and have JS include it), or default to auth-disabled unless explicitly configured.
```go
// Generate auth token for approval endpoints
authToken := os.Getenv("BEACON_AUTH_TOKEN")
if authToken == "" {
	authToken = generateToken()
}
log.Printf("beacon: approval token: %s", authToken)
```
The approval token is always generated (when BEACON_AUTH_TOKEN is unset) and then logged in plaintext. This both leaks a credential into logs and, combined with the dashboard not sending the token, effectively breaks approve/deny from the UI. Consider defaulting to auth disabled unless explicitly configured, and avoid printing the token (or only print a one-time warning with guidance).
Suggested change:
```diff
-// Generate auth token for approval endpoints
-authToken := os.Getenv("BEACON_AUTH_TOKEN")
-if authToken == "" {
-	authToken = generateToken()
-}
-log.Printf("beacon: approval token: %s", authToken)
+// Configure auth token for approval endpoints
+authToken, ok := os.LookupEnv("BEACON_AUTH_TOKEN")
+authToken = strings.TrimSpace(authToken)
+if !ok || authToken == "" {
+	// Default to auth disabled unless explicitly configured
+	authToken = ""
+	log.Printf("beacon: approval endpoints are unauthenticated; set BEACON_AUTH_TOKEN to enable authentication")
+} else {
+	// Auth is enabled, but do not log the token value
+	log.Printf("beacon: approval endpoint authentication enabled via BEACON_AUTH_TOKEN")
+}
```
```go
p.pendingMu.Lock()
p.pendingCalls[jsonrpcID] = &pendingCall{
	toolCallID: tcID,
	rawID:      rawID,
	msgID:      msgID,
```
pendingCalls is populated before policy evaluation, but when the decision is block or a pause is ultimately denied/times out, the request is never forwarded to the server so no response will arrive to trigger cleanup in handleToolCallResponse. This leaves stale entries in pendingCalls (memory growth) and can also interfere if the client reuses JSON-RPC IDs. Ensure pendingCalls is removed (and the tool call marked completed/denied) on all non-forwarded paths.
```go
reason := "denied"
if ruleName != "" {
	reason = ruleName + " (denied/timeout)"
}
```
In the pause async goroutine, when approval is denied/timeout, the tool call is left in policy_action='pause' even though the final outcome is denial. Update the record to a terminal policy action (e.g. deny) and perform the same pendingCalls cleanup/completion you do for successful responses.
Suggested change:
```diff
 	}
 }
+p.Store.UpdateToolCallPolicy(tcID, "deny", nil)
```
```go
	return nil, err
}
if durationMs.Valid {
	entry.DurationMs = &durationMs.Int64
```
entry.DurationMs = &durationMs.Int64 stores a pointer to a loop-local sql.NullInt64 field. Since the same durationMs variable is reused across iterations, all IntentToolEntry.DurationMs pointers can end up aliasing and showing the last row’s value. Copy the value into a new int64 variable per row before taking its address.
Suggested change:
```diff
-entry.DurationMs = &durationMs.Int64
+d := durationMs.Int64
+entry.DurationMs = &d
```
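The aliasing failure mode is easy to reproduce in isolation (`collectAliased`/`collectCopied` are illustrative helpers, not code from this PR):

```go
package main

import "fmt"

// collectAliased reproduces the bug: a single loop-local variable is
// reused, so every appended pointer aliases the same backing value.
func collectAliased(rows []int64) []*int64 {
	var v int64
	var out []*int64
	for _, r := range rows {
		v = r
		out = append(out, &v)
	}
	return out
}

// collectCopied is the fix: copy into a fresh variable each iteration
// before taking its address, so every pointer has its own backing value.
func collectCopied(rows []int64) []*int64 {
	var out []*int64
	for _, r := range rows {
		d := r
		out = append(out, &d)
	}
	return out
}

func main() {
	rows := []int64{100, 200, 300}
	a := collectAliased(rows)
	fmt.Println(*a[0], *a[1], *a[2]) // all print 300: every pointer aliases v
	b := collectCopied(rows)
	fmt.Println(*b[0], *b[1], *b[2]) // 100 200 300
}
```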
```go
// Sequence and prev_hash are read from the DB within a transaction, so multiple
// processes sharing the same database will produce a correct chain.
```
The comment claims reading sequence/prev_hash inside a transaction makes the hash chain correct across multiple processes, but with a unique index on messages(sequence) this approach can still race: concurrent writers can both read the same last sequence and then one insert will fail (or require retry). Either change the implementation to acquire an IMMEDIATE/EXCLUSIVE write lock (or retry on unique/SQLITE_BUSY) or remove/adjust the claim.
Suggested change:
```diff
-// Sequence and prev_hash are read from the DB within a transaction, so multiple
-// processes sharing the same database will produce a correct chain.
+// Sequence and prev_hash are read from the DB within a transaction, which keeps the
+// chain consistent for messages written by this process. Concurrent writers using
+// the same database can still race on the unique sequence constraint, so callers
+// should be prepared to handle insert errors (e.g. by retrying).
```
```go
if err == nil {
	prevHash = lastHash.String
	seq = lastSeq.Int64
}
// sql.ErrNoRows → empty DB, chain starts fresh
```
tx.QueryRow(...).Scan(...) errors other than sql.ErrNoRows are currently ignored, which can silently reset prevHash/seq and produce an incorrect chain (or hide underlying DB issues). Handle unexpected scan errors explicitly and return them instead of continuing.
Suggested change:
```diff
-if err == nil {
-	prevHash = lastHash.String
-	seq = lastSeq.Int64
-}
-// sql.ErrNoRows → empty DB, chain starts fresh
+if err != nil {
+	if err != sql.ErrNoRows {
+		return "", fmt.Errorf("query last message: %w", err)
+	}
+	// sql.ErrNoRows → empty DB, chain starts fresh
+} else {
+	prevHash = lastHash.String
+	seq = lastSeq.Int64
+}
```
Summary

- New e2e traffic generator (`cmd/beacon-e2e/`) — drives real MCP traffic through beacon-proxy wrapping the filesystem server, producing genuine audit trails with valid hash chains
- Fix JSON tags on the `ChainStatus` struct (`broken_at`/`error` were not serializing to JS correctly)
- Fix data race in `TestSSEEndpoint` — uses `httptest.NewServer` instead of a raw `ResponseRecorder`
- `make demo` runs real e2e traffic; `make demo-fake` uses synthetic data

Test plan

- `make test` passes
- `make test-race` passes (race fix verified)
- `make lint` clean
- `make demo` generates real traffic with valid hash chain
- `beacon-proxy verify` confirms chain intact

🔦