Releases: focaxisdev/deja-vu
Deja Vu v0.5.0
Deja Vu now treats remember and recall as equal parts of the protocol loop.
Earlier releases made cue-first recall cheap and observable. This release closes the loop after work completes: agents now get clearer routing for what to remember, where to write it, when to skip it, and how recall feedback should become future memory maintenance.
Highlights
- Added post-task writeback routing to the README, protocol, workflow, and AGENTS template.
- Added a `writeback_hint` to `deja-vu-scan-memory` output.
- Added `deja-vu-feedback-report` for aggregating recall feedback into maintenance suggestions.
- Expanded `deja-vu-lint-memory` to inspect Markdown memory lifecycle records, not only JSONL cues.
- Added tests for writeback hints, feedback reporting, Markdown linting, and package binary inclusion.
Why it matters
Recall without disciplined writeback eventually goes stale. Writeback without disciplined recall turns into noise.
Deja Vu v0.5.0 keeps the base product small while making the full loop explicit:
task cue -> familiarity score -> minimal recall -> durable writeback -> feedback-guided maintenance
Upgrade notes
No migration of existing memory layouts is required.
Recommended additions:
- Use the README writeback routing table after meaningful work.
- Run `deja-vu-lint-memory` to catch Markdown lifecycle issues.
- Run `deja-vu-feedback-report` when `memory/recall-feedback.jsonl` starts to accumulate actionable recall outcomes.
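For orientation, a single `memory/recall-feedback.jsonl` record that `deja-vu-feedback-report` could aggregate might look like this. The field names are illustrative assumptions; only the four outcome values (helpful, irrelevant, missed, overloaded) come from the protocol itself:

```json
{"date": "2026-01-05", "cue": "payments retry logic", "outcome": "missed", "note": "retry policy decision existed but had no impression cue"}
```

A `missed` record like this is exactly the kind of signal that should turn into a new or sharpened impression cue during maintenance.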
Deja Vu v0.4.1: Protocol-First Activation
Deja Vu remains an ultra-light, repo-local memory protocol for AI coding agents.
v0.4.1 sharpens the adoption path around the smallest useful product: three files, one workflow, and optional helper tooling only when a project grows enough to justify it.
Why It Matters
The core problem is not that agents need another memory platform. The problem is that every new coding-agent chat starts without the project decisions, architecture intent, open loops, and stable preferences that already exist.
Deja Vu keeps the fix small: scan tiny repo-local cues, load the least memory that preserves continuity, and write back only durable context.
Highlights
- README opening now leads with the concrete pain: stop re-explaining the repo to every new coding-agent chat.
- Added a 2-minute start path centered on
AGENTS.md,memory/summary.md, andmemory/impressions.jsonl. - Reaffirmed that the base product is the protocol, not an npm package, service, daemon, vector database, or engine.
- Clarified that scripts, recall feedback, detailed records, and the TypeScript engine are optional scale-up layers.
Upgrade Notes
No protocol migration is required.
Existing v0.4.0 projects can keep using the same minimum files:
- `AGENTS.md`
- `memory/summary.md`
- `memory/impressions.jsonl`
Use `memory/recall-feedback.jsonl`, decision records, open loops, events, and engine helpers only when they reduce repeated explanation or improve recall quality.
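As a rough sketch, one line of a minimal `memory/impressions.jsonl` carries just enough cue text to route recall toward a durable record. The field names below are illustrative, not the canonical template:

```json
{"id": "imp-001", "cue": "auth uses short-lived JWTs, refresh handled in gateway", "record": "memory/decisions/auth-tokens.md", "weight": 0.8}
```

A handful of lines like this, plus a short `memory/summary.md`, is the entire minimum footprint; everything else is optional scale-up.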
Suggested GitHub Release Title
Deja Vu v0.4.1: Protocol-First Activation
Suggested Tagline
Stop re-explaining your repo to every new coding-agent chat.
Deja Vu v0.4.0: Feedback-Aware Memory for AI Coding Agents
Deja Vu stores persistent project memory for AI coding agents in repo-local Markdown and JSONL, without a database, vector store, embedding model, or hosted memory service.
v0.4.0 turns recall into an observable feedback loop. Agents can now see why memory was loaded, how much recall budget was spent, and whether the result was helpful, irrelevant, missed, or overloaded.
Why It Matters
AI coding agents lose useful project context between chats. Most memory systems answer by storing and retrieving more text. Deja Vu keeps the first step cheap: scan tiny cues, load only what the task justifies, and write back durable context that belongs in git.
This release adds the missing reward signal. A project can now learn which memory routes are worth keeping, which ones are noisy, and which cues missed important context.
Highlights
- Recall budget output for scripted scans and the optional TypeScript engine.
- Recall feedback outcomes: helpful, irrelevant, missed, and overloaded.
- New `memory/recall-feedback.jsonl` template and example records.
- Stronger memory linting for feedback, weights, statuses, dates, scopes, and record paths.
- README and package metadata tuned for AI coding agent memory discovery.
- Protocol, workflow, scripted recall, storage, templates, and examples updated to Deja Vu Protocol v0.4.
Upgrade Notes
The minimum protocol still works with:
- `AGENTS.md`
- `memory/summary.md`
- `memory/impressions.jsonl`
Add `memory/recall-feedback.jsonl` when recall quality should tune future memory behavior. Do not log every scan; record feedback only when it changes what the project should remember, ignore, lower in weight, or revise.
Suggested GitHub Release Title
Deja Vu v0.4.0: Feedback-Aware Memory for AI Coding Agents
Suggested Tagline
Repo-local Markdown memory for AI coding agents, now with observable recall budgets and feedback outcomes.
Deja Vu v0.3.1
Deja Vu v0.3.1 is a recall-quality patch for the cue-first protocol release.
Highlights
- `deja-vu-lint-memory` now warns when impression cues are too sparse, too large, duplicated, too generic, or repeated across records.
- Default engine summaries now preserve gist cues as decision, rationale, and trigger fields when those labels are available.
- Default engine chunking now respects Markdown headings and paragraph boundaries before using hard character splits.
- The patch keeps the v0.3 protocol surface unchanged while making low-token recall routes cleaner and less noisy.
Validation
```sh
npm run test:src
npm run lint:memory
```
Deja Vu v0.3.0
Deja Vu v0.3.0 is the cue-first protocol release.
Highlights
- The product definition now centers on cue-first recall: task cue -> familiarity score -> minimal recall -> durable writeback.
- Minimum adoption now requires only project rules, `memory/summary.md`, and `memory/impressions.jsonl`.
- Decision records and open loops are recommended once a project has durable memory worth routing to.
- Event ledgers, context records, and memory indexes are optional scale-up tools, not setup requirements.
- The protocol now defines a recall budget:
  - impression scan: always allowed
  - summary: at most one file
  - detailed records: one to three records
  - full memory tree: only when explicitly requested
- Impression records now emphasize short cue routes instead of mini summaries.
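The shift from mini summaries toward cue routes can be pictured with two contrasting impression styles. Both lines below are illustrative, not the protocol's exact schema: the first is the older mini-summary style the release moves away from, the second is the preferred short cue route pointing at a durable record:

```json
{"id": "imp-db", "summary": "We evaluated SQLite and Postgres in June, benchmarked write throughput, discussed operational cost, and finally chose Postgres for concurrent writers."}
{"id": "imp-db", "cue": "Postgres over SQLite: concurrent writers", "record": "memory/decisions/database.md"}
```

The cue-route form keeps the always-scanned impression file tiny and leaves the full reasoning in a detailed record that is loaded only when the recall budget justifies it.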
Validation
```sh
npm run lint:memory
npm test
```
Deja Vu v0.2.2
Highlights
- Engine `scanImpressions()` now performs token-only familiarity scanning.
- Query embeddings are deferred until strong recall needs chunk retrieval.
- Added `deja-vu-scan-memory` and `deja-vu-lint-memory` package binaries.
- Scanner CLI now supports `--memory-root` and `--file`.
- Scanner output reports `matched` only for weak or strong matches.
- Added memory linting for impression schema, duplicate ids, and linked record paths.
- Adoption docs, handshake guidance, and example project now treat impression-first recall as the default path.
- Protocol naming is updated to v0.2.
Validation
```sh
npm test
npm run lint:memory
npm pack --dry-run --json
node scripts/dejavu-scan-memory.mjs --memory-root examples/protocol-project/memory "protocol impression memory"
node scripts/dejavu-lint-memory.mjs --memory-root examples/protocol-project/memory
```
Deja Vu v0.2.0 - Protocol-First Reboot
Deja Vu v0.2.0 is the protocol-first reboot release.
This release changes the product center from an engine-first TypeScript package to a project memory protocol built on:
- rules
- workflow
- Markdown memory
Highlights
- Added the Deja Vu protocol spec in `docs/protocol.md`.
- Added the workflow spec in `docs/workflow.md`.
- Added the canonical Markdown storage contract in `docs/storage-markdown.md`.
- Added copyable rules and memory templates in `docs/templates/`.
- Added `examples/protocol-project/` as the repo-first adoption example.
- Reframed the npm package as an optional semantic recall engine layer.
- Kept the existing TypeScript engine API intact.
- Updated the source-test script for compatibility with the current Node.js v24 environment.
Recommended adoption path
Start with:
- `README.md`
- `docs/protocol.md`
- `docs/workflow.md`
- `docs/storage-markdown.md`
- `docs/templates/`
Use the npm package only if the host later needs semantic recall and threshold-gated loading.
Validation
```sh
npm run build
npm run test:src
npm test
```
Deja Vu v0.1.0
Deja Vu is a familiarity-first memory engine for AI agents.
This first public release is aimed at developers who want to try a staged memory model instead of always-on retrieval:
- familiarity detection before context loading
- threshold-gated summary access
- chunk retrieval only when the match is strong enough
- plugin-first architecture for embeddings, storage, and scoring
- in-memory adapters for quick local evaluation
Install
```sh
npm install deja-vu
```
Try It In 3 Minutes
```ts
import { createInMemorySemanticRecallEngine } from "deja-vu";

const engine = createInMemorySemanticRecallEngine();

await engine.addMemory({
  title: "Launch strategy",
  content: "Use familiarity-first recall before loading long project history.",
  tags: ["launch", "memory"],
});

const result = await engine.recall({
  text: "This sounds like the launch plan for the memory engine.",
  loadChunks: true,
});

console.log({
  matched: result.matched,
  familiarityLevel: result.familiarityLevel,
  score: result.score,
});
```
Included In v0.1.0
- `SemanticRecallEngine` public API
- hybrid scoring with semantic, recency, and importance signals
- summary and chunk gating thresholds
- in-memory storage and vector stores
- mock embedding provider for deterministic local demos
- examples and integration documentation
Current Scope
Deja Vu is a memory core, not a full hosted memory platform. This release is best suited for:
- coding agents
- project memory
- long-running task assistants
- host runtimes that want an embeddable memory module
Current Limitations
- the default adapters are in-memory only
- production use still needs persistent storage and a real embedding provider
- the bundled mock embedding provider is for demos, not semantic accuracy at scale