```
   ____                        __  __    ____  ____
  / __ \____ ___  ____  ___  / /_/ /_  / __ \/ __ )
 / / / / __ `__ \/ __ \/ _ \/ __/ __ \/ / / / __  |
/ /_/ / / / / / / / / /  __/ /_/ / / / /_/ / /_/ /
\____/_/ /_/ /_/_/ /_/\___/\__/_/ /_/_____/_____/
```
OmnethDB is an embedded, versioned memory database for autonomous agents.
It is not a vague "AI memory layer", not a flat vector store, and not a chat-history cache. It is a serious memory primitive for project knowledge: explicit lineage, typed relations, governed writes, auditable history, and retrieval that prefers current truth over stale-but-similar text.
LLMs forget everything between runs. Real work does not.
Agents working on a codebase keep rediscovering the same facts:
- why a weird config is intentional
- which architecture rule is non-negotiable
- what incident already happened before
- which old statement is now obsolete
Most "memory" systems store all of that in one blob and hope retrieval sorts it out later. That creates a dangerous failure mode: the agent remembers the wrong thing with high confidence.
OmnethDB exists to solve that semantic problem, not just the storage problem.
- `Update` is a real state transition, not a loose tag. When one memory supersedes another, retrieval sees the new truth.
- `Static`, `Episodic`, and `Derived` mean different things and are governed differently.
- `Forget` does not delete history. It records lifecycle explicitly.
- relations are typed and auditable
- write policy is explicit
- trust is policy-driven
- retrieval and inspection are separate jobs
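For example, the forget lifecycle can be exercised end to end from the CLI. This is a sketch only: `forget`, `revive`, and `audit` are commands listed later in this README, but the `--id` flag spelling and the id `mem_123` are illustrative assumptions:

```sh
# Retire a memory. History is preserved; only the lifecycle state changes.
omnethdb forget --workspace . --space repo:company/app --id mem_123

# The transition is recorded and inspectable in the audit trail.
omnethdb audit --workspace . --space repo:company/app

# If the fact turns out to still hold, the lineage can be brought back.
omnethdb revive --workspace . --space repo:company/app --id mem_123
```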
If you want the full contract, read docs/ARCHITECTURE.md.
If you are new to the repo:
- Read docs/GETTING_STARTED.md
- Read docs/CONCEPTS.md
- Use docs/SETUP.md to configure a real workspace
- Use docs/RELEASING.md if you want to ship binaries and publish releases
- Use docs/INDEX.md when you need the full planning and architecture stack
Install the published binaries and put these executables on your PATH:
- `omnethdb`
- `omnethdb-mcp`
Quick install:
```sh
curl -fsSL https://raw.githubusercontent.com/ubcent/omnethdb/main/scripts/install.sh | sh
```

Install a specific version:

```sh
curl -fsSL https://raw.githubusercontent.com/ubcent/omnethdb/main/scripts/install.sh | VERSION=v0.1.0 sh
```

The script installs to `~/.local/bin` by default. Override with `INSTALL_DIR=/your/bin/dir`.
See the full walkthrough in docs/GETTING_STARTED.md.
Ask the CLI what it can do:
```sh
omnethdb help
```

Create a workspace config:

```toml
[spaces."repo:company/app"]
default_weight = 1.0
half_life_days = 30
max_static_memories = 500
max_episodic_memories = 10000
profile_max_static = 20
profile_max_episodic = 10

[spaces."repo:company/app".embedder]
model_id = "builtin/hash-embedder-v1"
dimensions = 256
```

Bootstrap the space:
```sh
omnethdb init \
  --workspace . \
  --space repo:company/app
```

Write a stable fact:
```sh
omnethdb remember \
  --workspace . \
  --space repo:company/app \
  --kind static \
  --actor-id user:alice \
  --actor-kind human \
  --content "payments use cursor pagination"
```

Recall current live knowledge:
```sh
omnethdb recall \
  --workspace . \
  --spaces repo:company/app \
  --query pagination \
  --top-k 5
```

The normal loop is simple:
- `profile` or `recall` before work
- `lineage` before updating an existing fact
- `remember --update ...` when reality changed
- `related` when you need to inspect explicit graph links
- `audit` when you need the change trail
If you are using OmnethDB as a real operating memory for agents, this is the habit to build:
- read before writing
- update instead of duplicating
- write only durable facts or meaningful incidents
- derive only from multiple current sources with a rationale
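Put together, one working session might look like the following sketch. The `recall` and `remember` flags match the examples earlier in this README; the `profile` and `lineage` flag spellings and the id `mem_abc` are assumptions for illustration:

```sh
# Read before writing: load the layered profile, then query live memories.
omnethdb profile --workspace . --spaces repo:company/app
omnethdb recall --workspace . --spaces repo:company/app --query pagination --top-k 5

# Reality changed: inspect the lineage, then update instead of duplicating.
omnethdb lineage --workspace . --space repo:company/app --id mem_abc
omnethdb remember --workspace . --space repo:company/app \
  --kind static --actor-id user:alice --actor-kind human \
  --update mem_abc \
  --content "payments moved to keyset pagination in v2"
```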
Main commands:
- `init`: bootstrap a space
- `remember`: write a memory
- `lint-remember`: preview duplicate/update warnings before writing
- `recall`: query live memories
- `profile`: build a layered memory profile
- `forget`: forget a memory without deleting history
- `revive`: revive an inactive lineage
- `lineage`: inspect version history
- `related`: traverse explicit relations
- `candidates`: raw candidate search for curation and authoring
- `quality`, `quality-plan`, `quality-report`: inspect memory quality and cleanup opportunities
- `synthesis-candidates`, `promotion-suggestions`: curator-facing advisory review flows
- `audit`: inspect audit history
- `export`: render snapshot, markdown summary, or Mermaid graph
- `migrate`: migrate a space to a new embedder
- `space`, `space validate-config`, `space diff-config`, `space apply-config`: inspect and reconcile persisted config
- `config`: print workspace layout and loaded runtime config
- `serve`: run the HTTP API
- `serve-grpc`: run the gRPC API
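For instance, a write can be dry-run with `lint-remember` before anything is stored; it is assumed here to accept the same flags as `remember`:

```sh
# Surfaces duplicate/update warnings without writing anything.
omnethdb lint-remember \
  --workspace . \
  --space repo:company/app \
  --kind static \
  --actor-id user:alice \
  --actor-kind human \
  --content "payments use cursor pagination"
```

If the linter flags an existing lineage, prefer `remember --update ...` over a duplicate write.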
The root package `omnethdb` is the supported public facade.
The main operator entrypoint is `cmd/omnethdb`.
OmnethDB ships with a local stdio MCP server:
```sh
omnethdb-mcp --workspace .
```

A Claude Code starter pack lives in examples/claude-code/README.md.
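Many MCP clients register stdio servers with a JSON entry roughly like the one below. The exact file location and schema depend on the client, and the server name `omnethdb` is arbitrary; the Claude Code starter pack in examples/claude-code/README.md is the supported reference:

```json
{
  "mcpServers": {
    "omnethdb": {
      "command": "omnethdb-mcp",
      "args": ["--workspace", "."]
    }
  }
}
```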
Run the HTTP API:

```sh
omnethdb serve --workspace . --addr :8080
```

Run the gRPC API:

```sh
omnethdb serve-grpc --workspace . --addr :9090
```

Proto contract:
- `cmd/omnethdb/`: CLI
- `cmd/omnethdb-mcp/`: MCP server
- `internal/memory/`: domain model and validation
- `internal/policy/`: governance and trust resolution
- `internal/store/bolt/`: bbolt-backed storage and transactional semantics
- `internal/httpapi/`: HTTP transport
- `internal/grpcapi/`: gRPC transport
- `internal/mcp/`: MCP tool surface
- `embedders/hash/`: built-in deterministic embedder
- `examples/`: runnable examples and integration samples
- `docs/`: architecture, planning stack, onboarding, and operator docs
The full planning stack lives in docs/INDEX.md.
Most important source-of-truth docs:
- docs/ARCHITECTURE.md
- docs/SPEC_MAP_V1.md
- docs/CAPABILITY_MAP_V1.md
- docs/UAT_MAP_V1.md
- docs/BACKLOG_V1.md
- docs/MILESTONE_PLAN_V1.md
- docs/RELEASING.md
The bar in this repo is intentionally high:
- semantic correctness over fuzzy convenience
- explicit behavior over hidden magic
- inspectable state transitions over "it probably works"
- operator simplicity without product compromise
- docs that help the next person move faster instead of reverse-engineering intent
That standard applies to the documentation too.