Memory traces for AI agents
Self-improving memory system with quality control, drift detection, and learning framework. Built for AI agents that need to learn, remember, and self-correct - with full privacy and zero API costs.
For AI Agents: see Agent Integration for a quick integration guide.
Documentation:
- Memory Guide - How to use memory efficiently
- Quick Reference - Fast lookup for agents
- Agent Integration - API reference
The Problem: Current AI memory systems accumulate everything without filtering. Over time, they fill with noise, lose coherence, and can't distinguish valuable insights from garbage.
What goes wrong:
- No quality control - Garbage in = garbage retained forever
- No drift detection - The system doesn't know when it's losing coherence
- No consolidation - Can't store everything, but also can't decide what to keep
- Result: Memory systems that get worse over time, not better
Engram's Solution:
- Quality gates - Only store verified, high-quality learnings
- Drift detection - Monitors when memory patterns diverge from goals
- Dream consolidation - Like human sleep, replays and strengthens valuable memories while filtering noise
- Self-correction - Automatically adjusts when it detects issues
Result: Memory that improves over time, not degrades.
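The quality-gate idea can be sketched in a few lines: reject a candidate memory unless its source quality clears a minimum threshold before it ever reaches the store. The function name and threshold below are illustrative assumptions, not Engram's actual internals.

```python
# Hypothetical quality gate (illustrative names and threshold, not
# Engram's real internals): keep a memory only if its source_quality
# clears a minimum bar.
MIN_QUALITY = 7

def quality_gate(memory: dict, min_quality: int = MIN_QUALITY) -> bool:
    """Return True if the memory is good enough to store."""
    return memory.get("source_quality", 0) >= min_quality

candidates = [
    {"topic": "coding", "lesson": "Always validate inputs", "source_quality": 9},
    {"topic": "misc", "lesson": "random noise", "source_quality": 2},
]
kept = [m for m in candidates if quality_gate(m)]
print(len(kept))  # only the high-quality lesson survives
```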
- Privacy-First - Local embeddings, no data leaves your machine
- Smart Memory - Semantic search with quality filtering
- Intent-Aware Retrieval - Auto-adjusts search based on query intent (v0.13)
- Reasoning Memory - Decision traces, skill extraction, learn from success/failure (v0.14)
- Self-Improving - Quality evaluation and drift detection
- Neuroscience-Inspired - Dream consolidation, homeostatic regulation
- Active Learning - Tracks knowledge gaps, suggests what to learn (v0.11)
- Episodic Memory - Store experiences, not just facts (v0.11)
- Framework-Agnostic - Works with any LLM
- All-in-One - Single Docker container, FastAPI server
Run with Docker:

```bash
docker run -d -p 8765:8765 -v ./memories:/data/memories ghcr.io/compemperor/engram:latest
```

Or from source:

```bash
git clone https://github.com/compemperor/engram.git
cd engram
pip install -r requirements.txt
python -m engram
```

API: http://localhost:8765
Docs: http://localhost:8765/docs (interactive OpenAPI)
```bash
curl -X POST http://localhost:8765/memory/add \
  -H "Content-Type: application/json" \
  -d '{"topic":"coding","lesson":"Always validate inputs","source_quality":9}'
```

```bash
curl -X POST http://localhost:8765/memory/search \
  -H "Content-Type: application/json" \
  -d '{"query":"best practices","top_k":5}'
```

Python client:

```python
import requests

api = "http://localhost:8765"

# Store
requests.post(f"{api}/memory/add", json={
    "topic": "debugging",
    "lesson": "Check logs first",
    "source_quality": 9,
})

# Search
r = requests.post(f"{api}/memory/search", json={
    "query": "debugging",
    "top_k": 5,
})
print(r.json()["results"])
```

Three layers working together:
- Memory - Local embeddings (E5) + FAISS vector search
- Mirror - Quality evaluation, drift detection, consolidation
- Learning - Structured sessions with self-verification
Use cases:
- AI Agents - Persistent memory with quality control
- Personal Knowledge - Build your external brain
- Research - Track learnings with drift detection
- Trading Bots - Remember lessons, prevent mistakes
- Chatbots - Maintain context across sessions
- docs/AGENT_INTEGRATION.md - Quick guide for AI agents
- docs/QUICKSTART.md - 5-minute tutorial
- docs/MEMORY-GUIDE.md - How to use memory efficiently
- /docs - Interactive API docs (when running)
Tech stack:
- Python 3.11 - Core runtime
- FastAPI - REST API with OpenAPI docs
- E5-base-v2 - Local embeddings (768-dim, high-quality semantic search)
- FAISS - Vector similarity search
- Docker - Single container deployment
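Conceptually, the embedding + FAISS layer turns each memory into a vector and ranks memories by similarity to the query vector. The sketch below substitutes a toy bag-of-words "embedding" and brute-force cosine ranking so it runs anywhere; Engram itself uses 768-dim E5-base-v2 vectors in a FAISS index.

```python
# Toy stand-in for the embedding + vector-search layer: a bag-of-words
# "embedding" and brute-force cosine similarity. Illustrative only;
# the real system uses E5-base-v2 embeddings and FAISS.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = ["check logs first when debugging",
            "always validate user inputs",
            "prefer small pull requests"]
query = embed("debugging tips: logs")
ranked = sorted(memories, key=lambda m: cosine(query, embed(m)), reverse=True)
print(ranked[0])  # the debugging memory ranks first
```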
Memory & Forgetting
- Ebbinghaus Forgetting Curve - Exponential memory decay model
- FadeMem (2026) - Adaptive memory fading for LLMs (45% storage reduction)
- Google Titans - Adaptive weight decay with surprise metrics
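For reference, the Ebbinghaus curve models retention as exponential decay, R(t) = exp(-t / S), where t is elapsed time and S is memory strength. A minimal sketch with illustrative strength values (not FadeMem's or Engram's actual parameters):

```python
import math

def retention(t_days: float, strength: float) -> float:
    """Ebbinghaus forgetting curve: R(t) = exp(-t / S)."""
    return math.exp(-t_days / strength)

weak, strong = 2.0, 10.0  # illustrative strength values
# After a week, the strong memory retains far more than the weak one.
print(retention(7, strong) > retention(7, weak))  # True
```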
Learning & Recall
- Spaced Repetition - Optimal review scheduling
- Spreading Activation Theory - Knowledge graph auto-linking
Retrieval
- SimpleMem (2026) - Intent-aware retrieval planning, 26% F1 improvement
Reasoning & Skills
- ReasoningBank - Distill traces into reusable strategies
- ExpeL - Experiential learning without fine-tuning
- Voyager - Skill library concept
- Generative Agents - Memory + reflection architecture
Embeddings
- E5-base-v2 - High-quality text embeddings
Projects
- Butterfly RSI - Drift detection & dream consolidation
- ICLR 2026 Workshop - Recursive self-improvement research
- Alternative vector DBs (Milvus, Qdrant)
- Reasoning Memory (NEW in v0.14) - Decision traces, distillation, skill extraction
- Intent-Aware Retrieval (v0.13) - Auto-adjusts search params based on query intent
- Memory Compression & Replay - Consolidate similar memories, strengthen via replay
- Heuristic Quality Assessment - Auto-evaluates memory quality from usage patterns (no LLM needed); runs during the sleep cycle
- Reflection Phase - Synthesize memories into higher-level insights (inspired by Generative Agents)
- Auto-Reflection - Sleep scheduler automatically reflects on topics every 24h
- Memory Fading - Biologically-inspired forgetting with sleep scheduler
- E5-base-v2 Embeddings - High-quality semantic search
- Temporal Weighting - Boost recent + high-quality memories
- Context Expansion - Auto-expand related memories via knowledge graph
- Spaced Repetition - Memory review scheduling
- Episodic/Semantic Memory - Separate experiences from facts
- Knowledge Graphs - Memory relationships and auto-linking
- Learning Sessions - Structured learning with quality filtering
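As one way to picture features like temporal weighting, a retrieval score can combine semantic similarity, a recency boost, and the stored quality score. The formula and half-life below are assumptions for illustration, not Engram's actual ranking function.

```python
# Assumed scoring formula (illustration only): similarity, damped by an
# exponential recency decay with a 30-day half-life, scaled by quality.
def weighted_score(similarity: float, age_days: float, quality: int,
                   half_life: float = 30.0) -> float:
    recency = 0.5 ** (age_days / half_life)  # 1.0 when fresh, -> 0 with age
    return similarity * (0.5 + 0.5 * recency) * (quality / 10)

fresh = weighted_score(similarity=0.8, age_days=1, quality=9)
stale = weighted_score(similarity=0.8, age_days=365, quality=9)
print(fresh > stale)  # True: recent memories outrank old ones at equal similarity
```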
See GitHub Releases for detailed changelog.
Apache 2.0