
feat(llm): implement file and redis-backed caching layer#625

Merged
github-actions[bot] merged 3 commits into main from
feature/llm-caching-12002706700728817908
Mar 2, 2026
Conversation

@stancsz (Owner) commented Mar 2, 2026

Implements an LLM response caching layer to reduce token costs and improve operational efficiency, backed by either a local file store or Redis. Cache keys are derived from the prompt and conversation history, and responses are stored with a configurable TTL. Metrics (cache hits, misses, estimated savings) are pushed directly into the Health Monitor. Includes documentation and unit/integration tests confirming the bypassed paths (YOLO and streaming modes).


PR created automatically by Jules for task 12002706700728817908 started by @stancsz

- Adds `LLMCache` interface with `FileCache` and `RedisCache` implementations.
- Updates `LLM.generate` to cache successful results based on prompt/history hashing, skipping streaming and YOLO mode requests.
- Integrates LLM cache hit/miss/savings metrics into the Health Monitor `aggregateCompanyMetrics` function.
- Adds comprehensive unit and integration tests.
- Updates `ROADMAP.md` and provides `docs/LLM_CACHING.md`.
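To make the description concrete, here is a minimal sketch of the shape the bullets above describe. The `LLMCache` name comes from the PR; the method signatures, the in-memory stand-in (in place of the actual `FileCache`/`RedisCache` backends), and the SHA-256 key derivation are assumptions, not the PR's exact API.

```typescript
import { createHash } from "crypto";

// Interface shape assumed from the PR description.
interface LLMCache {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string, ttlMs: number): Promise<void>;
}

// In-memory stand-in for FileCache/RedisCache, enough to show the TTL flow.
class MemoryCache implements LLMCache {
  private store = new Map<string, { value: string; expiresAt: number }>();

  async get(key: string): Promise<string | undefined> {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // Expired entries count as misses and are evicted lazily.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  async set(key: string, value: string, ttlMs: number): Promise<void> {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
}

// Deterministic key from prompt + conversation history, as the PR describes.
function cacheKey(prompt: string, history: string[]): string {
  return createHash("sha256")
    .update(JSON.stringify({ prompt, history }))
    .digest("hex");
}
```

Because the key covers both prompt and history, two requests only share a cache entry when the full conversation context matches, which is what makes caching safe for multi-turn agents.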

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
@google-labs-jules (Contributor) commented:

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

stancsz and others added 2 commits March 2, 2026 02:04
… src/llm.ts

- Adds `LLMCache` interface with `FileCache` and `RedisCache` implementations directly in `src/llm/cache.ts`.
- Updates `LLM.generate` inside `src/llm.ts` to cache successful results based on prompt/history hashing, keeping the file name unchanged so existing imports don't break.
- Integrates LLM cache hit/miss/savings metrics correctly into the Health Monitor `aggregateCompanyMetrics` function mapping `m.agent === 'llm'`.
- Adds comprehensive unit and integration tests.
- Updates `ROADMAP.md` and provides `docs/LLM_CACHING.md`.
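The Health Monitor integration mentioned above boils down to filtering metrics where `m.agent === 'llm'` (the predicate the commit message names) and summing the cache counters. The record shape and field names in this sketch are assumptions; only the `agent === 'llm'` mapping comes from the commit.

```typescript
// Hypothetical metric record; field names are illustrative.
interface AgentMetric {
  agent: string;
  cacheHits?: number;
  cacheMisses?: number;
  estimatedTokensSaved?: number;
}

// Aggregation in the spirit of aggregateCompanyMetrics: pick out the
// LLM-tagged records and total their cache counters.
function aggregateLlmCacheMetrics(metrics: AgentMetric[]) {
  const llm = metrics.filter((m) => m.agent === "llm");
  return {
    hits: llm.reduce((n, m) => n + (m.cacheHits ?? 0), 0),
    misses: llm.reduce((n, m) => n + (m.cacheMisses ?? 0), 0),
    tokensSaved: llm.reduce((n, m) => n + (m.estimatedTokensSaved ?? 0), 0),
  };
}
```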

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
…ENAI_API_KEY

Sets a dummy OPENAI_API_KEY during test setup so that the LLM class doesn't fast-fail before it reaches the mocked AI provider.
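The idea is easy to sketch: inject a placeholder key before the LLM module's config check runs. `OPENAI_API_KEY` is named in the commit; the guard function below is a hypothetical stand-in for whatever validation the real `LLM` class performs.

```typescript
// Hypothetical fast-fail guard like the one the LLM class performs.
function requireApiKey(): string {
  const key = process.env.OPENAI_API_KEY;
  if (!key) throw new Error("OPENAI_API_KEY is not set");
  return key;
}

// In a test setup file (e.g. Jest/Vitest globalSetup), a dummy value is
// injected so the guard passes and requests reach the mocked provider.
process.env.OPENAI_API_KEY ??= "test-dummy-key";
```

The `??=` assignment only fills the variable when it is unset, so a real key exported in the environment still takes precedence.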

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
@github-actions github-actions bot merged commit d6a7417 into main Mar 2, 2026
2 checks passed
