
promote: main → prod (retire indexSource interpreter + cleanup)#145

Merged
klappy merged 5 commits into prod from main, Apr 26, 2026

Conversation

@klappy
Owner

@klappy klappy commented Apr 26, 2026

Promotion: main → prod

Promotes the merged-to-main work described below to the production deployment:

Why a separate promotion

Per docs/decisions/D0001-prod-branch-is-production.md, the prod branch is the sole source of production deployments. Merging to main lands code; promoting to prod ships it. I missed that distinction in the prior steps and was watching the live worker waiting for a deploy that wasn't going to happen until this promotion.

Companion canon already merged

klappy/klappy.dev PR #144 (table updates for cache_hits/cache_lookups doubles, retirement of blob9 cache_tier row) merged earlier. Both code and canon land in production together with this promotion.

Post-merge verification

After CF Workers Builds completes:

-- Hit rate over last 30 min — will return real numbers once the new bundle is live
SELECT SUM(cache_hits) AS hits, SUM(cache_lookups) AS lookups,
       SUM(cache_hits) * 1.0 / NULLIF(SUM(cache_lookups), 0) AS hit_rate
FROM oddkit_telemetry
WHERE timestamp > NOW() - INTERVAL '30' MINUTE;

And the trace envelope on any real action will switch from index_source to fetches[] + cacheStats.
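The envelope switch can be sketched as a type. This is a hypothetical reconstruction from this PR's description (field names beyond `fetches`, `cacheStats`, `spans`, and `total_ms` are assumptions; the real shapes live in workers/src/tracing.ts):

```typescript
// Hypothetical sketch of the post-refactor trace envelope, inferred from this
// PR's description; exact field names in workers/src/tracing.ts may differ.
interface FetchRecord {
  url: string;         // tier-prefixed: memory:// cf-cache:// r2:// build:// or https://
  duration_ms: number;
  cached: boolean;
  size?: number;
  status?: number;
}

interface TraceEnvelope {
  spans: unknown[];    // action spans and sha:* spans, still via addSpan
  fetches: FetchRecord[];
  cacheStats: { hits: number; misses: number; total: number };
  total_ms: number;
}

// Example: one memory hit, one origin miss. No index_source field anywhere.
const envelope: TraceEnvelope = {
  spans: [],
  fetches: [
    { url: "memory://index", duration_ms: 0, cached: true },
    { url: "https://example.com/bundle.zip", duration_ms: 42, cached: false },
  ],
  cacheStats: { hits: 1, misses: 1, total: 2 },
  total_ms: 42,
};
```

The invariant to verify post-merge: `cacheStats` is pure arithmetic over `fetches[]`, so the two can never disagree.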


Note

Medium Risk
Medium risk because it changes the telemetry/tracing schema (retiring blob9 and adding new doubles) and refactors how storage/cache access is recorded, which can affect observability dashboards and downstream queries.

Overview
Retires the single-value telemetry field cache_tier (blob9) and replaces it with cache effectiveness metrics: cache_hits and cache_lookups (double7/double8), sourced from per-request fetch recording.

Refactors RequestTracer to stop interpreting a “winning” cache tier (indexSource) and instead record per-fetch facts via recordFetch, exposing fetches[] and cacheStats in trace output; index.ts now passes tracer.cacheStats into recordTelemetry after streaming responses complete.

Updates the baseline fetcher instrumentation and integration tests to emit/validate the new fetch-based tracing and updated Analytics Engine schema.

Reviewed by Cursor Bugbot for commit 1635142. Bugbot is set up for automated code reviews on this repo. Configure here.

Klappy (via Claude) and others added 5 commits April 26, 2026 15:52
…, telemetry tallies

The tracer's _indexSource interpreter picked one tier as the 'winner' of a
multi-fetch request and wrote it to telemetry blob9 (cache_tier). It produced
four bugs in four PRs (#137 missing field, #138 streaming race, #139 file:*
recognition, ongoing first-vs-slowest debate). TH-mcp doesn't have these bugs
because it never collapses — it records each fetch with cached: true|false and
reports cacheStats {hits, total} via arithmetic.

This refactor does what TH-mcp does: deletes the interpreter, records per-fetch
facts, lets dashboard queries do any aggregation. Transparency was the goal;
the aggregator was the obscuring layer.

workers/src/tracing.ts:
- Add recordFetch({ url, duration_ms, cached, size?, status? }) — per-fetch records
- Add cacheStats getter returning { hits, misses, total } — derived arithmetic
- Delete _indexSource field, indexSource getter, addSpan setter logic
- toJSON returns { spans, fetches, cacheStats, total_ms } — drops index_source
- Keep addSpan for non-fetcher use (action spans, sha:* spans)
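The bullets above can be sketched as a minimal tracer. This is an illustrative reduction, not the real class (the real RequestTracer also carries spans and total_ms; internals here are assumptions):

```typescript
// Minimal sketch of the recordFetch/cacheStats design described above.
// Only the fetch-recording half is shown; spans/addSpan are elided.
type Fetch = { url: string; duration_ms: number; cached: boolean; size?: number; status?: number };

class RequestTracer {
  private fetches: Fetch[] = [];

  /** Record one fetch as a fact; no "winning" tier is interpreted. */
  recordFetch(f: Fetch): void {
    this.fetches.push(f);
  }

  /** Derived arithmetic only — no stored aggregate to drift out of sync. */
  get cacheStats(): { hits: number; misses: number; total: number } {
    const hits = this.fetches.filter((f) => f.cached).length;
    return { hits, misses: this.fetches.length - hits, total: this.fetches.length };
  }

  toJSON() {
    return { fetches: this.fetches, cacheStats: this.cacheStats };
  }
}
```

Because `cacheStats` is a getter over `fetches[]`, there is no interpreter state to get wrong — which is the whole point of the refactor.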

workers/src/zip-baseline-fetcher.ts:
- Convert all 15 cache-layer addSpan call sites to recordFetch
- URL prefix carries the tier: memory:// cf-cache:// r2:// build:// or real https://
- Drop duplicate 'index' cache-tier span (cacheGet now records the cf-cache:// fetch)
- sha:* spans kept as addSpan per spec
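A hypothetical before/after for one of the 15 converted call sites (variable names and the minimal tracer type are illustrative, not the real zip-baseline-fetcher.ts signatures):

```typescript
// Illustrative conversion of one cache-layer call site. The TracerLike type
// is a stand-in for the real RequestTracer.
type TracerLike = {
  recordFetch(f: { url: string; duration_ms: number; cached: boolean; size?: number }): void;
};

// Before (interpreter era): tracer.addSpan("cache:r2", duration_ms)
// After: the URL scheme carries the tier, `cached` carries the fact.
function recordR2Lookup(tracer: TracerLike, key: string, duration_ms: number, hit: boolean, size?: number): void {
  tracer.recordFetch({
    url: `r2://${key}`, // other tiers: memory:// cf-cache:// build:// or real https://
    duration_ms,
    cached: hit,
    size,
  });
}
```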

workers/src/index.ts:
- Replace cacheTier = tracer.indexSource with stats = tracer.cacheStats
- Pass cacheStats {hits, total} to recordTelemetry
- Preserve PR #138 streaming-race fix (read inside ctx.waitUntil after body consumed)
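The index.ts handoff can be sketched as follows. This is a stub mirroring the shapes named in this PR, not the real handler (in the real worker it runs inside ctx.waitUntil, after the streamed body has been fully consumed, per the PR #138 fix):

```typescript
// Sketch of the index.ts change: read cacheStats after streaming completes,
// then pass {hits, total} on. Both parameter types are stand-ins.
type CacheStats = { hits: number; misses: number; total: number };

function flushTelemetry(
  tracer: { cacheStats: CacheStats },
  recordTelemetry: (stats: { hits: number; total: number }) => void,
): void {
  const { hits, total } = tracer.cacheStats; // replaces tracer.indexSource
  recordTelemetry({ hits, total });
}
```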

workers/src/telemetry.ts:
- blob9 retired (slot stays free per 'no deprecation, nobody uses them yet' rule)
- Add double7 cache_hits, double8 cache_lookups
- recordTelemetry signature: cacheStats?: {hits, total} replaces cacheTier?: string
- BASELINE_BLOB_SEMANTIC_NAMES shrunk to 8; BASELINE_DOUBLE_SEMANTIC_NAMES grown to 8
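The slot mapping above might look like this in recordTelemetry. An illustrative fragment only (the real datapoint also carries the other doubles and the 8 remaining blobs, elided here):

```typescript
// Illustrative mapping from the optional cacheStats argument to the
// Analytics Engine slots named above: double7 = cache_hits, double8 = cache_lookups.
function cacheDoubles(cacheStats?: { hits: number; total: number }): { double7: number; double8: number } {
  return {
    double7: cacheStats?.hits ?? 0,   // cache_hits
    double8: cacheStats?.total ?? 0,  // cache_lookups
  };
}
```

A dashboard hit-rate query then becomes `SUM(double7) / NULLIF(SUM(double8), 0)` against the raw slots, matching the semantic-name query in the PR description.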

workers/test/telemetry-integration.test.mjs:
- Delete 4 PR #139 file:*-recognition tests + index-wins-over-file test (interpreter gone)
- Rework streaming-race regression test to use cacheStats (semantics unchanged)
- Add tracer.recordFetch arithmetic test (cacheStats mirrors fetches[])
- All recordTelemetry call sites updated to pass {hits, total} instead of string
- 19 tests pass, 0 fail

Canon update for telemetry-governance.md ships in klappy/klappy.dev separately.
Workers Builds reported success on 1a1fd4e (PR #141 merge) at 2026-04-26T19:18:49Z, but the deployed bundle is still serving pre-#141 tracing.ts (envelope still has `index_source` field that was deleted in the refactor; no `fetches[]` array). Suspect stale build cache or ref-tracking issue on the CF integration. This empty commit gives the build a fresh SHA to react to.

If this doesn't resolve within ~5 min, the next step is dashboard-side investigation of Workers Builds settings — not a code fix.

Co-authored-by: Klappy (via Claude) <claude@klappy.dev>
@cloudflare-workers-and-pages

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

Status: ✅ Deployment successful! (View logs)
Name: oddkit
Latest Commit: 1635142
Preview URLs: Commit Preview URL / Branch Preview URL
Updated (UTC): Apr 26 2026, 09:16 PM


@cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: New fetchCount getter is never used anywhere
    • Removed the unused fetchCount getter from RequestTracer in workers/src/tracing.ts since it had zero callers in production or tests.
Preview (1635142936)
diff --git a/workers/src/tracing.ts b/workers/src/tracing.ts
--- a/workers/src/tracing.ts
+++ b/workers/src/tracing.ts
@@ -156,9 +156,4 @@
   get spanCount(): number {
     return this.spans.length;
   }
-
-  /** Number of recorded fetches. */
-  get fetchCount(): number {
-    return this.fetches.length;
-  }
 }

You can send follow-ups to the cloud agent here.


Comment thread workers/src/tracing.ts
/** Number of recorded fetches. */
get fetchCount(): number {
  return this.fetches.length;
}

New fetchCount getter is never used anywhere

Low Severity

The newly added fetchCount getter on RequestTracer has zero callers — not in production code, not in tests. Its analog spanCount is exercised in the test file (line 671), but fetchCount is never referenced. The test at line 663 uses json.fetches.length instead, bypassing the getter entirely.



@klappy klappy merged commit 79768ac into prod Apr 26, 2026
6 checks passed
