Context
Current memory sources (OpenCode, Claude Code, Kiro) are all local agent conversations with clear user/assistant structure. The reflect pipeline extracts judgment patterns from human corrections to AI — the conversation structure gives us signal for free.
Slack is fundamentally different:
- API-based, not local files
- Multi-party, not user/assistant pairs
- Continuous streams, not discrete sessions
- Low signal-to-noise — chatter mixed with judgment
But Slack captures something the agent sources don't: how you think in your own voice when you're not correcting an AI. Design decisions, architectural arguments, pushback on bad ideas — that's in Slack threads, but it's not structured as corrections.
Plan
1. Introduce a Source/Provider interface
The current code hardcodes each source as a function call in Upload() (muse.go:157-176). Before adding Slack, refactor to a proper interface:
```go
type Provider interface {
	Name() string
	Sessions() ([]Session, error)
}
```
Upload() iterates over []Provider instead of source-specific if blocks. This is overdue regardless of Slack — we already have 3 sources following the same pattern.
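A minimal sketch of the refactor, assuming a hypothetical `Session` shape (the real struct lives in the existing code) and stubbing the actual upload step:

```go
package main

import "fmt"

// Session is a placeholder for the existing session type; field names are assumptions.
type Session struct {
	ID       string
	Messages []string
}

// Provider abstracts a memory source (OpenCode, Claude Code, Kiro, and later Slack).
type Provider interface {
	Name() string
	Sessions() ([]Session, error)
}

// Upload iterates over providers instead of source-specific if blocks.
func Upload(providers []Provider) error {
	for _, p := range providers {
		sessions, err := p.Sessions()
		if err != nil {
			return fmt.Errorf("%s: %w", p.Name(), err)
		}
		for _, s := range sessions {
			// Real upload logic goes here; printing stands in for it.
			fmt.Printf("uploading %s session %s\n", p.Name(), s.ID)
		}
	}
	return nil
}
```

Each existing source then becomes a small adapter satisfying `Provider`, and adding Slack is just one more adapter rather than another branch in `Upload()`.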
2. Slack source adapter
- Use Slack API (bot token or user token) to pull threads where the user has substantive participation
- Thread = session (threads have enough coherence to be a unit of memory)
- Filter out noise: reactions, one-liners, link drops
- Map thread participants to roles, preserving who said what
- Env vars: MUSE_SLACK_TOKEN for the token, MUSE_SLACK_USER for the user ID
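The noise filter is the part worth sketching up front. A rough cut, assuming message fields that mirror Slack's `conversations.replies` payload; the word-count threshold and link heuristic are assumptions to tune:

```go
package main

import "strings"

// SlackMessage holds the fields we need from a thread reply;
// field names follow the Slack API's conversations.replies payload.
type SlackMessage struct {
	User string `json:"user"`
	Text string `json:"text"`
	TS   string `json:"ts"`
}

// isNoise drops one-liners and bare link drops, keeping substantive
// messages. Reaction events never appear as messages, so they are
// excluded upstream by only fetching thread replies.
func isNoise(m SlackMessage) bool {
	text := strings.TrimSpace(m.Text)
	// Slack wraps URLs in angle brackets; a message that is only a
	// link carries no judgment signal.
	if strings.HasPrefix(text, "<http") && !strings.Contains(text, " ") {
		return true
	}
	// One-liners ("lgtm", "+1", "ship it") rarely contain reasoning.
	if len(strings.Fields(text)) < 5 {
		return true
	}
	return false
}
```

Filtering at ingest keeps the thread-as-session unit small enough that the reflect pipeline sees mostly substantive turns.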
3. Adapt the dream pipeline for non-agent sources
The current reflect pipeline assumes human→AI correction patterns. Slack needs a different extraction model:
- What you contributed to a thread matters, not the correction pattern
- May need a different reflect prompt or a parallel extraction path keyed on source
- The learn phase (soul synthesis) can likely stay unified — reflections are reflections regardless of where they came from
This is the hard part and may need experimentation with prompts before it's clear what works.
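One cheap way to start experimenting is to key the reflect prompt on the provider name rather than fork the pipeline. The prompt text below is illustrative only, not the real prompts:

```go
package main

// reflectPromptFor selects an extraction prompt per source. Keying on the
// provider name keeps the pipeline unified while the Slack prompt is iterated
// on; prompt wording here is a placeholder, not the production prompt.
func reflectPromptFor(source string) string {
	switch source {
	case "slack":
		return "Extract what this user contributed to the multi-party thread: " +
			"design decisions, architectural arguments, pushback on bad ideas."
	default: // agent sources: OpenCode, Claude Code, Kiro
		return "Extract judgment patterns from the human's corrections to the AI."
	}
}
```

If prompt selection turns out to be insufficient, this switch is the natural seam at which to split off a parallel extraction path later.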
Open Questions
- Should Slack extraction be a separate pipeline stage, or can we adapt the existing reflect prompts to handle multi-party conversation?
- What's the right unit of memory — threads only, or also DMs and time-windowed channel activity?
- Rate limits and incremental sync — how to avoid re-pulling everything on each push
- Token scoping — what's the minimal Slack permission set needed?