
feat(task-runner): implement task output caching#247

Closed
thalesraymond wants to merge 1 commit into main from feat-task-caching-12645309015252488804

Conversation

@thalesraymond
Owner

Implements a task caching mechanism to avoid redundant executions.


PR created automatically by Jules for task 12645309015252488804 started by @thalesraymond

- Added `ICacheProvider` interface and `MemoryCacheProvider`
- Added `CachingExecutionStrategy` to wrap `IExecutionStrategy`
- Updated `TaskStep` with optional `TaskCacheConfig`
- Updated `TaskRunnerExecutionConfig` with `cacheProvider`
- Maintained 100% test coverage with new unit and integration tests
- Archived specification using the OpenSpec workflow
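The bullets above describe the new caching surface. As a rough illustration only, the pieces might fit together like this; the exact signatures are not shown in the PR, so every shape below is an assumption inferred from the description and the review snippets:

```typescript
// Hypothetical shapes inferred from the PR description; the real
// interfaces in the repository may differ.
interface TaskResult {
  output: unknown;
}

interface ICacheProvider {
  get(key: string): Promise<TaskResult | undefined>;
  set(key: string, result: TaskResult, ttlMs?: number): Promise<void>;
}

// Sketch of the optional per-step cache config on TaskStep.
interface TaskCacheConfig<C = unknown> {
  key: (context: C) => Promise<string> | string; // derives the cache key
  ttlMs?: number;                                // optional time-to-live
}

class MemoryCacheProvider implements ICacheProvider {
  private readonly cache = new Map<
    string,
    { result: TaskResult; expiresAt?: number }
  >();

  async get(key: string): Promise<TaskResult | undefined> {
    const entry = this.cache.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt !== undefined && Date.now() > entry.expiresAt) {
      this.cache.delete(key); // passive expiration on read
      return undefined;
    }
    return entry.result;
  }

  async set(key: string, result: TaskResult, ttlMs?: number): Promise<void> {
    this.cache.set(key, {
      result,
      expiresAt: ttlMs !== undefined ? Date.now() + ttlMs : undefined,
    });
  }
}
```

A `CachingExecutionStrategy` would then consult the provider before delegating to the wrapped `IExecutionStrategy`, as the review comments below show.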

Co-authored-by: thalesraymond <32554150+thalesraymond@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@sonarqubecloud

sonarqubecloud bot commented Apr 3, 2026

@codecov

codecov bot commented Apr 3, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a task caching mechanism to the TaskRunner system, including a new CachingExecutionStrategy, an ICacheProvider interface, and a default MemoryCacheProvider. The changes allow tasks to define cache keys, TTLs, and restoration logic to skip redundant executions. However, the PR accidentally replaces the entire existing specification document instead of appending to it. Additionally, the caching strategy is vulnerable to cache stampedes during concurrent executions, and the memory cache provider lacks an active eviction policy, which could lead to memory leaks in long-running processes.

@@ -1,174 +1,58 @@
# task-runner Specification
## ADDED Requirements

Severity: high

The diff indicates that the entire existing specification (Purpose, Requirements for Execution, Cancellation, etc.) has been removed and replaced only with the new caching requirements. This appears to be an accidental deletion of critical documentation. Please ensure the new requirements are integrated into the existing specification rather than replacing it.

Comment on lines +26 to +27
```typescript
const cacheKey = await step.cache.key(context);
const cachedResult = await this.cacheProvider.get(cacheKey);
```

Severity: medium

This implementation is susceptible to a 'cache stampede' (or dog-piling). If multiple independent tasks with the same cache key are executed concurrently, they will all experience a cache miss and trigger the innerStrategy.execute() simultaneously. For expensive tasks, this negates the benefit of caching for the initial parallel runs. Consider implementing a mechanism to track and await pending executions for the same cache key to ensure the work is only performed once.
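One common way to implement the suggested mitigation is single-flight deduplication: a map of in-flight promises keyed by cache key, so concurrent misses await the same execution. A minimal sketch, assuming the `cacheProvider`/`innerStrategy` names from this PR (the `SingleFlight` helper itself is hypothetical):

```typescript
// Single-flight deduplication: concurrent callers with the same key
// share one in-flight promise instead of each doing the work.
class SingleFlight<T> {
  private readonly pending = new Map<string, Promise<T>>();

  run(key: string, work: () => Promise<T>): Promise<T> {
    const inFlight = this.pending.get(key);
    if (inFlight) return inFlight;

    // Register the promise before anyone else can miss on this key,
    // and clean up once it settles (success or failure).
    const p = work().finally(() => this.pending.delete(key));
    this.pending.set(key, p);
    return p;
  }
}
```

In `CachingExecutionStrategy`, the cache-miss path could then be wrapped as something like `this.flights.run(cacheKey, () => this.innerStrategy.execute(step, context))`, so only the first concurrent miss triggers the underlying execution.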

Comment on lines +8 to +11
```typescript
private readonly cache = new Map<
  string,
  { result: TaskResult; expiresAt?: number }
>();
```

Severity: medium

The MemoryCacheProvider uses a passive expiration strategy, where expired items are only removed when they are accessed via get(). In a long-running process with many unique cache keys that are never reused, this will lead to a memory leak as the Map grows indefinitely. Consider implementing a periodic cleanup mechanism (e.g., using setInterval) or using a cache with a maximum size and an eviction policy (like LRU) to ensure memory usage remains bounded.
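The suggested LRU bound can be sketched in a few lines by exploiting the fact that a JavaScript `Map` iterates in insertion order; re-inserting a key on access keeps recently used keys at the end. This is an illustrative sketch only, not the PR's code:

```typescript
// Minimal bounded LRU sketch. Map preserves insertion order, so the
// first key in iteration order is always the least recently used.
class LruCache<V> {
  private readonly entries = new Map<string, V>();

  constructor(private readonly maxSize: number) {}

  get(key: string): V | undefined {
    const value = this.entries.get(key);
    if (value === undefined) return undefined;
    // Refresh recency by moving the key to the end of iteration order.
    this.entries.delete(key);
    this.entries.set(key, value);
    return value;
  }

  set(key: string, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxSize) {
      // Evict the least-recently-used entry.
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }
}
```

Backing `MemoryCacheProvider` with a structure like this (or adding a periodic sweep of expired entries) would keep memory bounded even when keys are never revisited.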

