An addressable hierarchical memory system for LLMs. Compress a 10k-token context into 2k active tokens without losing information — by making the rest addressable on demand.
python nlp benchmark research memory knowledge-graph apache-arrow rag entity-deduplication llm long-context kuzu lancedb tool-calling hierarchical-memory
Updated Apr 10, 2026 · Jupyter Notebook