Shared memory for your team's agents
Persistent memory that survives across sessions and detects when agents contradict each other.
When one agent discovers something important — a hidden side effect, a failed approach, an undocumented constraint — it commits that fact. Every other agent on your team can query it instantly.
When two agents develop incompatible beliefs, Engram detects the contradiction and surfaces it for review.
Your data is private. All data is encrypted, isolated by workspace, and never read, analyzed, or redistributed. We have a deep commitment to privacy.
macOS / Linux:

```bash
curl -fsSL https://engram-us.com/install | sh
```

Windows PowerShell:

```powershell
irm https://engram-us.com/install.ps1 | iex
```

Windows CMD:

```cmd
curl -fsSL https://engram-us.com/install.cmd -o install.cmd && install.cmd && del install.cmd
```

By default, the installer writes `https://mcp.engram.app/mcp` into your MCP config. If your environment needs a different endpoint, set `ENGRAM_MCP_URL` before running the installer.
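For example, to point the installer at a self-hosted endpoint (the URL below is a placeholder, not a real Engram endpoint):

```shell
# Override the default MCP endpoint before installing.
# The endpoint URL here is a placeholder — substitute your own.
export ENGRAM_MCP_URL="https://mcp.internal.example.com/mcp"
curl -fsSL https://engram-us.com/install | sh
```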
Restart your editor, then ask your agent:
"Set up Engram for my team"
Your agent handles the rest.
If you want to run Engram from this repository during development:

```bash
pip install -e ".[dev]"
python -m engram.cli serve --http
```

Then open:

http://127.0.0.1:7474/dashboard

If `engram` is not on your PATH, `python -m engram.cli ...` works reliably.
🤖 Agent
Do you have an Invite Key to join an existing workspace, or are you setting up a new one?
👤 You
New
🤖 Agent
✅ Your team workspace is ready.
Share this Invite Key with teammates:
`ek_live_abc123...`

That's all they need — one key, nothing else to configure.
Should commits show who made them, or stay anonymous?
🤖 Agent
Do you have an Invite Key to join an existing workspace, or are you setting up a new one?
👤 You
Join — here's my key:
ek_live_abc123...
🤖 Agent
You're in. I'll query team memory before starting work on anything.
That's it. Teammates only need the Invite Key. No database URL, no Team ID, no configuration.
```
┌──────────────────────────────────────────┐
│ MCP Tools                                │
│ engram_commit    — Write a fact          │
│ engram_query     — Read team knowledge   │
│ engram_conflicts — See disagreements     │
│ engram_resolve   — Settle conflicts      │
├──────────────────────────────────────────┤
│ Conflict Detection                       │
│ Tier 0: Entity exact-match               │
│ Tier 1: NLI cross-encoder (local)        │
│ Tier 2: Numeric/temporal rules           │
│ Tier 3: LLM escalation (rare)            │
├──────────────────────────────────────────┤
│ Hosted Storage                           │
│ Managed Postgres — zero setup            │
│ Isolated per workspace                   │
└──────────────────────────────────────────┘
```
No database to provision, no servers to run, no ports to open. Install and go.
Your memory is yours. This isn't a footnote — it's the foundation Engram is built on.
- Encrypted. All data is encrypted in transit (TLS) and at rest. Invite keys use encrypted payloads so teammates never see raw credentials.
- Isolated. Every workspace is fully isolated. There is no cross-workspace access, no shared tables, no data leakage between teams.
- Never read. We don't read your facts. We don't analyze your memory. We don't train on your data. We don't sell it. We have no analytics pipeline that touches your content. Period.
- Never redistributed. Your team's knowledge never leaves your workspace. It is never shared with other users, other teams, or third parties. Not now, not ever.
- You control it. Delete your workspace and everything is gone. Anonymous mode strips engineer names from all commits. Anonymous agents randomize agent IDs each session. You decide what's visible and what isn't.
| Tool | Purpose |
|---|---|
| `engram_commit` | Persist a verified discovery |
| `engram_query` | Pull what your team's agents know |
| `engram_conflicts` | Surface contradictions |
| `engram_resolve` | Settle disagreements |
| `engram_promote` | Graduate ephemeral memory to durable |
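As a conceptual illustration of how these tools fit together, here is a toy in-memory sketch of the commit/query/conflict/resolve lifecycle. It is not Engram's actual implementation or API; all names below are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    entity: str
    claim: str
    author: str
    retired: bool = False

class MemorySketch:
    """Toy in-memory model of the tool lifecycle (not Engram's real code)."""

    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def commit(self, entity: str, claim: str, author: str) -> None:
        self.facts.append(Fact(entity, claim, author))

    def query(self, entity: str) -> list[str]:
        return [f.claim for f in self.facts if f.entity == entity and not f.retired]

    def conflicts(self) -> list[tuple[Fact, Fact]]:
        # Tier-0 flavour: same entity, different claims, both still live.
        found = []
        for i, a in enumerate(self.facts):
            for b in self.facts[i + 1:]:
                if (a.entity == b.entity and a.claim != b.claim
                        and not a.retired and not b.retired):
                    found.append((a, b))
        return found

    def resolve(self, entity: str, winning_claim: str) -> None:
        # Retire every competing claim about the entity.
        for f in self.facts:
            if f.entity == entity and f.claim != winning_claim:
                f.retired = True

mem = MemorySketch()
mem.commit("api.rate_limit", "1000 req/min", "agent-a")
mem.commit("api.rate_limit", "2000 req/min", "agent-b")
assert len(mem.conflicts()) == 1          # a contradiction would be surfaced here
mem.resolve("api.rate_limit", "2000 req/min")
assert mem.query("api.rate_limit") == ["2000 req/min"]
```

The real system adds semantic (NLI) and LLM-based detection on top of this exact-match core, as described below.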
```bash
engram install              # Auto-detect IDEs and configure MCP
engram serve                # Start MCP server (stdio mode)
engram serve --http         # Start MCP server (HTTP mode)
engram setup                # One-command workspace setup
engram status               # Show workspace status
engram info                 # Display detailed workspace info
engram whoami               # Show current user identity
engram search <query>       # Query workspace from terminal
engram stats                # Show workspace statistics
engram config show          # Display configuration
engram config set <key>     # Update configuration
engram tail                 # Live stream of workspace commits
engram verify               # Verify installation
engram completion <shell>   # Install shell tab completion
```

Conflict detection runs asynchronously in the background:
| Tier | Method | Catches |
|---|---|---|
| 0 | Entity matching | "rate limit is 1000" vs "rate limit is 2000" |
| 1 | NLI cross-encoder | Semantic contradictions |
| 2 | Numeric rules | Different values for same entity |
| 3 | LLM escalation | Ambiguous cases (rare, optional) |
Commits return instantly. Detection completes in the background (~2-10s on CPU).
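The tiers escalate from cheap pattern checks to expensive model calls. The cheapest tiers can be approximated with plain string and number matching; here is an illustrative sketch (the `<entity> is <number>` grammar is an assumption for the example, not Engram's actual parser):

```python
import re

def extract_numeric(claim: str):
    """Parse claims shaped like '<entity> is <number>' into (entity, value)."""
    m = re.search(r"^(.*?)\s+is\s+(\d+(?:\.\d+)?)\b", claim.strip())
    if not m:
        return None
    return m.group(1).lower(), float(m.group(2))

def numeric_conflict(claim_a: str, claim_b: str) -> bool:
    """Flag two claims that assign different numbers to the same entity."""
    a, b = extract_numeric(claim_a), extract_numeric(claim_b)
    return a is not None and b is not None and a[0] == b[0] and a[1] != b[1]

assert numeric_conflict("rate limit is 1000", "rate limit is 2000")   # conflict
assert not numeric_conflict("rate limit is 1000", "timeout is 30")    # different entities
```

Claims this cheap check can't classify are what the NLI and LLM tiers exist for.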
Engram doesn't just accumulate — it actively forgets what doesn't earn its place.
- Ephemeral memory — Scratchpad facts auto-expire in 24h unless queried twice ("proved useful more than once")
- Importance decay — Unverified inferences expire after 30 days. Unverified observations expire after 90 days.
- Protected facts — Decisions, verified facts, and corroborated claims are never auto-retired.
- Steeper recency curve — A 90-day-old fact scores 0.001 in retrieval. Old context stops crowding out what matters now.
Grounded in the FiFA/MaRS research on forgetting-by-design for cognitive agents.
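The recency figure above is consistent with a simple exponential decay curve. A sketch, assuming an exponential form calibrated so a 90-day-old fact scores 0.001 (the exact curve Engram uses may differ):

```python
import math

# Calibrate the decay rate so that recency_score(90) == 0.001,
# matching the 90-day figure quoted above (assumed exponential form).
DECAY_PER_DAY = math.log(1000) / 90  # ~0.0768 per day

def recency_score(age_days: float) -> float:
    """Retrieval weight of a fact as a function of its age in days."""
    return math.exp(-DECAY_PER_DAY * age_days)

assert recency_score(0) == 1.0                     # fresh facts score full weight
assert abs(recency_score(90) - 0.001) < 1e-9       # 90-day-old facts are nearly invisible
```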
Engram is grounded in peer-reviewed research on multi-agent memory systems:
- Yu et al. (2026) — Multi-agent memory as a computer architecture problem
- Xu et al. (2025) — A-Mem's Zettelkasten structure for fact enrichment
- Rasmussen et al. (2025) — Graphiti's bitemporal modeling for temporal validity
- Hu et al. (2026) — Survey confirming shared memory as an open frontier
- Alqithami (2025) — FiFA: forgetting-by-design improves agent coherence
Full literature review: docs/LITERATURE.md
Implementation details: docs/IMPLEMENTATION.md
PRs welcome. See CONTRIBUTING.md.