[Priority 2] Implement incremental context streaming (Context Updater pattern) #62
Labels
enhancement (New feature or request)
Description
Problem
Current recall uses static snapshots: all matching entries are loaded at once. This doesn't scale well:
- Query with 50+ matches → 50+ subagent dispatches, even if top 5 answer the question
- No support for long-running iterative analyses (drill deeper based on findings)
- No dynamic refresh (swap stale entries for better ones mid-session)
Proposal (from AIGNE paper analysis)
Add a `ContextStream` class that manages dynamic context loading:

```python
from rlm.updater import ContextStream

stream = ContextStream(session_id='abc123')
stream.load_initial(top_entries=5)  # Start with the highest-scoring entries

while not analysis_complete:
    findings = dispatch_subagents(stream.current_context)
    if stream.needs_refresh(findings):
        stream.swap(remove=['stale_entry'], add=['deeper_dive_entry'])

stream.log_context_state()  # Full provenance
```

Three modes:
- Static snapshot — one-time load (current behavior)
- Incremental streaming — progressive loading as reasoning unfolds
- Adaptive refresh — replace stale/irrelevant fragments based on model feedback
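The issue only sketches the `ContextStream` interface, so here is a minimal self-contained sketch of how the three modes could hang together. The method names come from the proposal above; the list-of-strings context representation, the `ranked` parameter, and the stale-flag refresh heuristic are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ContextStream:
    """Hypothetical sketch of the proposed ContextStream."""
    session_id: str
    current_context: List[str] = field(default_factory=list)
    _log: List[Dict] = field(default_factory=list)

    def load_initial(self, top_entries: int = 5, ranked: List[str] = ()) -> None:
        # Static-snapshot seed: keep only the highest-scoring entries.
        self.current_context = list(ranked)[:top_entries]
        self._record("load_initial")

    def needs_refresh(self, findings: Dict) -> bool:
        # Adaptive-refresh trigger; the real heuristic is TBD. Here we
        # assume subagent findings flag stale entries explicitly.
        return bool(findings.get("stale"))

    def swap(self, remove: List[str], add: List[str]) -> None:
        # Replace stale fragments while keeping everything else in place.
        keep = [e for e in self.current_context if e not in set(remove)]
        self.current_context = keep + list(add)
        self._record("swap", removed=list(remove), added=list(add))

    def _record(self, event: str, **detail) -> None:
        # Every transition is appended, so provenance is a full replay.
        self._log.append({"session": self.session_id, "event": event,
                          "context": list(self.current_context), **detail})

    def log_context_state(self) -> List[Dict]:
        return list(self._log)


stream = ContextStream(session_id="abc123")
stream.load_initial(top_entries=2, ranked=["a", "b", "c"])
stream.swap(remove=["a"], add=["d"])
assert stream.current_context == ["b", "d"]
```

Incremental streaming would then be repeated `swap`/`load` calls inside the reasoning loop, with `log_context_state()` giving the full transition history.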
Implementation
- Create `rlm/updater.py` with a `ContextStream` class
- Integrate with graduated dispatch (use incremental mode automatically when >10 matches)
- Log all context state transitions to provenance log
- Add streaming support to recall pipeline
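The graduated-dispatch integration point can be sketched as a simple mode selector. Only the >10-match rule comes from this issue; the `choose_mode` name and `threshold` parameter are hypothetical:

```python
def choose_mode(num_matches: int, threshold: int = 10) -> str:
    """Pick a context-loading mode based on how many entries matched.

    Hypothetical helper: small result sets keep the current static-snapshot
    behavior; larger ones switch to incremental streaming automatically.
    """
    return "incremental" if num_matches > threshold else "static"


assert choose_mode(5) == "static"
assert choose_mode(50) == "incremental"
```

Adaptive refresh would remain opt-in per session, since it depends on model feedback rather than match count alone.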
Impact
- Resource efficiency (only load what's needed)
- Supports long-running iterative analyses
- Better user experience (faster results for queries with clear top-ranked answers)
- Full traceability of context evolution
Effort
3-4 days
Related
- Context Updater from 'Everything is Context' paper
- Graduated dispatch (already implemented)
- Provenance logging (dependency)