
perf(client): cache parsed chunk/batch origin coordinates#31

Open
RZDESIGN wants to merge 1 commit into hytopiagg:main from RZDESIGN:perf/cache-chunk-batch-origin-parsing

Conversation


@RZDESIGN RZDESIGN commented Mar 5, 2026

Summary

Chunk.chunkIdToOriginCoordinate() and Chunk.batchIdToBatchOrigin() convert string IDs like "32,0,-64" into { x, y, z } objects. They're called in hot per-frame loops:

| Caller | Frequency |
| --- | --- |
| `ChunkMeshManager.applyBatchViewDistance()` | Every batch, every frame |
| `ChunkManager` batch sorting | Every batch rebuild |
| `LightLevelManager` / `SkyDistanceVolumeManager` | On volume events |
| `ChunkWorker` | On every chunk build |
| `Chunk.getChunkIdsInBatch()` / `chunkIdToBatchId()` | Various |

Each uncached call performs:

  1. split(',') → allocates an Array<string> (plus a substring per component)
  2. .map(Number) → allocates a closure + a new Array<number>
  3. { x, y, z } → allocates a result object

That's five or more allocations per call. With ~200 active batches, applyBatchViewDistance() alone creates ~1,000 throwaway objects per frame — pure GC pressure for a deterministic mapping that never changes.
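For illustration, a minimal sketch of what the uncached parse looks like. The function name and the `Vector3Like` shape follow the PR description; the exact body in `Chunk.ts` may differ.

```typescript
interface Vector3Like { x: number; y: number; z: number; }

// Hypothetical reconstruction of the uncached parse.
function chunkIdToOriginCoordinate(chunkId: string): Vector3Like {
  // split(',')  -> allocates ['32', '0', '-64'] (array + substrings)
  // .map(Number) -> allocates a closure + [32, 0, -64]
  // { x, y, z } -> allocates the result object
  const [x, y, z] = chunkId.split(',').map(Number);
  return { x, y, z };
}

console.log(chunkIdToOriginCoordinate('32,0,-64')); // { x: 32, y: 0, z: -64 }
```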

Fix

Add a module-level Map<string, Vector3Like> cache for each method. The first call for a given ID parses and caches; all subsequent calls return the cached object. Since the mapping from string ID to coordinates is deterministic and all callers are read-only, this is completely safe.
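A sketch of the cache-first pattern described above, shown for one of the two methods. Names mirror the PR description (`chunkOriginCache`, `Vector3Like`); the actual code in `Chunk.ts` may be structured differently.

```typescript
interface Vector3Like { x: number; y: number; z: number; }

// Module-level cache: parse each unique ID once, then reuse the object.
const chunkOriginCache = new Map<string, Vector3Like>();

function chunkIdToOriginCoordinate(chunkId: string): Vector3Like {
  let origin = chunkOriginCache.get(chunkId);
  if (!origin) {
    // First call for this ID: pay the parse + allocation cost once.
    const [x, y, z] = chunkId.split(',').map(Number);
    origin = { x, y, z };
    chunkOriginCache.set(chunkId, origin);
  }
  return origin;
}

// Repeated calls return the same cached object, so per-frame loops
// allocate nothing:
const a = chunkIdToOriginCoordinate('32,0,-64');
const b = chunkIdToOriginCoordinate('32,0,-64');
console.log(a === b); // true
```

Note that callers receive a shared object, which is why the read-only assumption in the risk assessment matters: a caller that mutated the result would corrupt the cache for everyone.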

Changes

| File | Change |
| --- | --- |
| `client/src/chunks/Chunk.ts` | Added `chunkOriginCache` and `batchOriginCache` maps; wrapped both methods with a cache-first lookup |

Risk Assessment

  • Correctness: The mapping "x,y,z" → { x, y, z } is pure and deterministic — caching it cannot produce incorrect results.
  • Mutation safety: All callers read .x, .y, .z without mutation. If a future caller were to mutate the returned object, it would corrupt the cache — but this would also be a bug in the caller, not in the cache.
  • Memory: The cache grows with unique chunk/batch IDs. A large world with 10,000 chunks would store ~10,000 entries (trivial — each entry is a string key + 3-number object, roughly 100 bytes).
  • No eviction needed: Chunk/batch IDs are stable for the lifetime of the world. The caches are module-scoped and reset naturally on page reload.

Test Plan

  • Load a large world and move around — verify chunks render and cull correctly (view distance, frustum)
  • Check that light level volumes still work near light-emitting blocks
  • Monitor GC pauses / minor GC frequency — expect a measurable reduction in worlds with many batches

Commit message

chunkIdToOriginCoordinate() and batchIdToBatchOrigin() are called in
hot per-frame loops (view distance culling, batch sorting, light
volumes). Each call does split(',') + .map(Number), which allocates
an intermediate array, a closure, a mapped array, and an {x,y,z}
object — several allocations per call.

With ~200 active batches, applyBatchViewDistance() alone produces
~1,000 throwaway objects per frame. Add a Map<string, Vector3Like>
cache for each method so the parsing and allocation happen only once
per unique ID. All existing callers are read-only, so caching the
returned objects is safe.

Made-with: Cursor