Releases: rolandpg/zettelforge
v2.6.2 — config editor UI/UX fix
UI/UX release. Fixes the /config page so the Apply button actually works and surfaces enum-style settings as dropdowns instead of free-text inputs. No data migration. No config changes. No API contract changes.
Fixed
/config"Save Changes" button is no longer dead. The Quick Settings panel calledsaveConfigForm()andreloadConfig()— neither function was defined anywhere, so the button silently no-op'd and the panel rendered "Loading schema..." forever. Replaced with a real form-based editor whose Apply button PUTs a nested payload to/api/configand reloads from server on success.- YAML scalar emitter now escapes backslashes correctly. CodeQL #35/#36:
replace(/"/g, '\\"')left literal\untouched, so a value likefoo\"barwould have prematurely terminated the scalar. Now escapes\first, then". coerce()no longer overwrites numeric settings with 0 on empty input. Clearing a number field now reverts the change rather than queuing0, which was the previous (silent, surprising) behavior.- YAML parser correctly handles list values (e.g.
synthesis.tier_filter) when the YAML uses the indentedkey:\n - itemform. Stack frames now carryparentObj/parentKeyso the first list child can convert the optimistically-created{}into[]and append correctly.
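The escaping-order fix can be illustrated with a minimal sketch (a Python stand-in for the client-side emitter; the function name is illustrative):

```python
def emit_yaml_scalar(value: str) -> str:
    # Order matters: escape backslashes BEFORE quotes. Reversing the
    # order doubles the backslashes introduced for quotes; skipping
    # the backslash pass (the old bug) lets a value like foo\"bar
    # terminate the double-quoted scalar early.
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'
```

Applying the two replacements in the opposite order would corrupt any value containing a quote.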
Added
- Form-based config editor with dropdowns. `/config` now renders a grouped settings form alongside the YAML editor. Known enum fields (`backend`, `embedding.provider`, `llm.provider`, `llm.local_backend`, `logging.level`, `synthesis.default_format`, `governance.pii.action`) render as `<select>` controls. Restart-required leaves get a "restart required" badge sourced from the same set the server uses, so the UI warning is never out of sync with the server's classification.
- `GET /api/config/meta` endpoint. Single source of truth for the UI's "restart required" badge — eliminates server/UI drift on `_RESTART_REQUIRED_FIELDS`. Both UIs fetch on load, with a hardcoded fallback for offline-server safety.
- WAI-ARIA Tabs pattern completed on `/config`: `aria-selected`, `aria-controls`, `role=tabpanel`, `aria-labelledby`, roving `tabindex`, and arrow-key navigation (Left/Right/Home/End).
- Pending-changes counter and Revert button. The form tracks dirty fields by dotted path, builds a single nested payload on Apply, and shows `N pending change(s) (M need restart)` next to the buttons.
- YAML editor accepts both YAML and JSON (was JSON-only despite the label) and skips redacted `***` secrets so they aren't PUT back as literal strings.
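The dotted-path-to-nested-payload fold used by the Apply flow can be sketched like this (a Python stand-in for the client-side logic; names are illustrative):

```python
def build_nested_payload(dirty: dict) -> dict:
    """Fold dotted-path edits such as {"embedding.provider": "x"}
    into the nested payload shape PUT to /api/config."""
    payload: dict = {}
    for dotted, value in dirty.items():
        node = payload
        *parents, leaf = dotted.split(".")
        for key in parents:
            # Create intermediate objects on demand.
            node = node.setdefault(key, {})
        node[leaf] = value
    return payload
```

Tracking by dotted path keeps the dirty set flat and countable, while the PUT payload stays in the server's nested shape.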
Tests
31 passed, 2 skipped (was 24 + 2 in 2.6.1). New coverage: enum round-trip for restart-required + live fields, multi-section nested payload from a single Apply, /api/config/meta shape and auth, list-value PUT round-trip, and an HTML regression guard proving the dead saveConfigForm/reloadConfig handlers are gone.
Install: pip install --upgrade zettelforge==2.6.2
Full changelog: https://github.com/rolandpg/zettelforge/blob/master/CHANGELOG.md
v2.6.1 — RFC-015 web GUI hotfix
Hotfix release. Resolves three blockers found in code review of the
RFC-015 web GUI shipped in v2.6.0. No data migration. No config changes.
Fixed
- `/config` HTML page now renders. `_to_dict` was defined as a closure inside `get_config_endpoint`, so every render of `/config` raised `NameError`, was silently swallowed by a bare `except`, and left the YAML body blank on initial server-side render. Promoted to a module-level `_config_to_dict` helper used by both routes. (PR #131)
- `PUT /api/config` correctly reports nested restart-required fields. The check compared top-level payload keys against a set of dotted-path fields, so payloads like `{"embedding": {"provider": "x"}}` were reported as `applied: ["embedding"]`, `pending_restart: []`, telling operators a restart-required change had taken effect when it had not. Added `_flatten_keys` to walk nested payloads to dotted leaf paths; `applied` and `pending_restart` now contain accurate dotted paths. (PR #131)
- `/config` HTML route is now auth-gated. `/api/config` was protected, but the HTML shell (and, once the `_to_dict` bug was fixed, its server-rendered YAML body) was reachable without an API key. Added `Depends(require_api_guard)` and made the YAML body redact secrets before serialization. (PR #131)
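The nested-payload walk that fixes the restart-required report might look like this sketch (the `_flatten_keys` name comes from the release notes; the body here is illustrative, not the shipped code):

```python
def flatten_keys(payload: dict, prefix: str = "") -> list:
    # Walk a nested payload down to dotted leaf paths, so
    # {"embedding": {"provider": "x"}} yields ["embedding.provider"]
    # and can be compared against a set of dotted-path fields.
    leaves = []
    for key, value in payload.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict) and value:
            leaves.extend(flatten_keys(value, path))
        else:
            leaves.append(path)
    return leaves
```

Comparing leaf paths instead of top-level keys is what lets `pending_restart` report `embedding.provider` rather than a misleading empty list.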
Tests
- Added four regression tests in `tests/test_web_api.py` covering all three fixes. 24 passed, 2 skipped (was 20 + 2).
Upgrade
pip install -U 'zettelforge==2.6.1'
No config or data migration required.
v2.5.2
[2.5.2] - 2026-04-25
Hotfix release. Restores end-to-end functionality of synthesis, causal
triple extraction, fact extraction, LLM NER, and neighbor evolution
under any reasoning-style LLM (qwen3.5+, qwen3.6, nemotron-3, etc.).
Fixed
- Reasoning-model token starvation across every LLM call site. Reasoning models emit hidden `<think>...</think>` tokens that count against `num_predict` but never appear in the final `response` field Ollama returns. Pre-2.5.2 token caps (`max_tokens=300/400/800/1024`) were exhausted entirely by the thinking phase on these models, leaving the JSON answer empty. Symptoms: synthesis fell back to `"No specific answer found for: …"` on every query; causal triple extraction persisted 0 edges despite rich CTI text; LLM NER silently no-opped; neighbor evolution `parse_failed {schema=..., raw=""}` warnings flooded the log.

  Bumped every `generate(..., max_tokens=...)` call site to give reasoning models room to think and emit a final answer. Affected files:

  | File | Old cap | New cap |
  |---|---|---|
  | `note_constructor.py` (causal triples) | 300 | 8000 |
  | `synthesis_generator.py` | 800 | 2500 |
  | `fact_extractor.py` | 400 | 2500 |
  | `entity_indexer.py` (NER) | 300 | 2500 |
  | `memory_evolver.py` (2 sites) | 1024 | 2500 |

  Causal extraction needs the largest budget because the prompt asks the model to enumerate every causal relation in a passage; this triggers the longest reasoning chains anywhere in the system. Empirically, against `qwen3.5:9b`, 4000 tokens was only stochastically sufficient (`eval_count` varied 2.8k–4k+, ~70% success), so 8000 is the conservative cap that keeps the success rate above 95% on the same model. Other call sites converge with less reasoning overhead, so 2500 suffices.

- LLM client timeout bumped 60s → 180s. `LLMConfig.timeout` and the `OllamaProvider` constructor default were both 60 seconds — well below the 60–120s wall-clock time of a 4000–8000 token reasoning generation on a 9B-Q4_K_M model. `ReadTimeout` was firing during causal extraction even when the model would have returned valid JSON given another 30 seconds. Bumped both defaults plus `config.default.yaml` to 180s.

Verified end-to-end on `qwen3.5:9b`:

- Synthesis: query "What CVE does DROPBEAR exploit?" returns `"CVE-2024-3094"` with 1 source citation (was returning `"No specific answer found for: …"` on every call pre-2.5.2).
- Causal extraction: corpus seeded with APT28/DROPBEAR/CVE-2024-3094 text yields a 4-triple JSON array in 137s wall time: APT28 → targets → manufacturing sector; APT28 → uses → DROPBEAR; DROPBEAR → exploits → CVE-2024-3094; APT28 → attributed_to → Russian GRU Unit 26165.
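For operators grepping their own logs, the starvation signature can be sketched as a predicate over an Ollama-style result (assuming the standard `eval_count` and `response` fields; the helper name is hypothetical, not part of ZettelForge):

```python
def starved_by_reasoning(result: dict, max_tokens: int) -> bool:
    # Hidden <think> tokens count against the budget but never reach
    # `response`, so an exhausted budget combined with an empty answer
    # (under HTTP 200) is the token-starvation signature: the model
    # spent its entire allowance thinking.
    spent = result.get("eval_count", 0)
    answer = result.get("response", "")
    return spent >= max_tokens and not answer.strip()
```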
Operational note
Slow models: with 8000 tokens of reasoning budget, single causal extraction calls now take 60–140s on a 9B model. `remember(sync=True)` in this configuration will block 1–3 minutes per note. The default async path (background enrichment queue) is the preferred mode. Operators on faster hardware or smaller models can lower the caps via config/env if needed, but the v2.5.2 defaults trade latency for end-to-end correctness on the reference model.
Notes
This explains the `evolution_parse_failed` and `causal_triples` `parse_failed` cascades documented in the v2.4.x Vigil incident. The v2.4.2 PR #95 Tier 1/2 LLM observability surfaced the empty responses, but root-cause attribution to the token-cap-vs-thinking-budget interaction waited until the v2.5.1 perf-bench run made the failure reproducible end-to-end.
v2.5.1
[2.5.1] - 2026-04-25
Hotfix release. Surfaced during the v2.5.0 perf benchmark run.
Fixed
- `KnowledgeGraph._cache_edge` crashed on legacy-schema edges. Long-running deployments accumulated `kg_edges.jsonl` entries written by a now-removed pre-v2.5.x writer that used `{source_id, target_id, relation_type}` instead of the canonical `{from_node_id, to_node_id, relationship}` keys. The loader hard-failed with `KeyError: 'from_node_id'` on the first such row, taking down every `recall()` and `synthesize()` that touches the KG. Affects any workspace with mixed-schema edge history; observed locally with 189k edges where ~80k were the legacy shape.

  `_normalize_edge_schema()` now remaps legacy keys to canonical on load and silently drops entries that are still un-normalizable, with a count logged at WARNING so operators can see the skip volume. Six new regression tests in `tests/test_kg_edge_schema.py` cover pass-through, remap, missing-fields, non-dict, mixed-batch, and corrupt-JSON cases. The previously-broken environment-dependent `test_basic.py::test_ingest_relationship` now passes deterministically.
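The remap described above can be sketched as follows (illustrative only; the shipped `_normalize_edge_schema()` may differ in details such as logging):

```python
LEGACY_TO_CANONICAL = {
    "source_id": "from_node_id",
    "target_id": "to_node_id",
    "relation_type": "relationship",
}

REQUIRED = {"from_node_id", "to_node_id", "relationship"}

def normalize_edge_schema(edge):
    # Remap legacy keys to the canonical names; return None for
    # entries that are still un-normalizable so the loader can drop
    # them (and count the drops) instead of raising KeyError.
    if not isinstance(edge, dict):
        return None
    remapped = {LEGACY_TO_CANONICAL.get(k, k): v for k, v in edge.items()}
    if not REQUIRED <= remapped.keys():
        return None
    return remapped
```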
v2.5.0
[2.5.0] - 2026-04-25
Compliance-driven minor release. Closes every CRITICAL and HIGH audit
finding except H-3 (mypy strict) and the ANN slice of H-1, both of
which need per-module ratchet plans. Also adds two new optional LLM
backends, a Presidio PII detector, and supply-chain hardening.
Added
- RFC-011 — Local LLM backend selection (#104). New `local_backend` config knob picks between `llama-cpp-python` (GGUF) and `onnxruntime-genai` (ONNX) at runtime. Both ship as optional extras (`pip install zettelforge[local]` or `[local-onnx]`).
- RFC-012 — LiteLLM unified provider (#108). Routes to 100+ upstream LLM providers via the LiteLLM SDK. Optional extra (`pip install zettelforge[litellm]`); the base package never imports it unless the SDK is present.
- RFC-013 — Microsoft Presidio PII detection (#118). Optional PII validator with three policies (log/redact/block), configurable via `governance.pii.*`. CTI allowlist excludes `IP_ADDRESS`/`URL`/`DOMAIN_NAME` from detection so legitimate threat-intel indicators flow through unmodified. Soft dependency — `pip install zettelforge[pii]` to activate; the base package never imports `presidio_analyzer` unless the SDK is present.
- GOV-009 Snyk SCA + SAST declared in `controls.yaml` (#114). The spec-drift validator now walks every `.github/workflows/*.yml` so controls whose CI step lives outside `ci.yml` (Snyk's separate workflow) can be honestly declared.
- GOV-006 solo-maintainer compensating controls (#117). New `controls.yaml` entry pins the existing CI gates (lint, tests, governance spec-drift) as compensating controls for the GOV-006 two-person review rule that cannot be physically satisfied with one human maintainer. CODEOWNERS updated with explanatory comment.
- `SECURITY.md` + CODEOWNERS added to the repo root for vulnerability disclosure and review attribution.
Changed
- All GitHub Actions are now SHA-pinned (audit H-5 hardening). Every `uses: org/repo@vX` reference replaced with `uses: org/repo@<full-sha> # vX.Y.Z` to prevent supply-chain attacks via tag rewrites.
- Ruff rule set ratcheted to GOV-003 §"Tooling and Automation" minus ANN (#106 + #107 + #109 + #111 + #113). Active `select` list: `{E, F, I, W, N, T20, B, UP, SIM, RUF, S}`. Per-line `# noqa: SXXX` annotations document each accepted exception (best-effort fallbacks, non-crypto RNG, `?`-bound SQL with constant column lists). `RUF002`/`RUF003` ignored globally for stylistic en-dash and ×.
- CI install-step shell precedence fixed (#112). The `pip install -e ".[dev]" || pip install -e "." && pip install pytest...` chain parsed as `(A || B) && C`, so the pytest install ran on every success path, including when `[dev]` already provided pytest. Wrapped the fallback in parentheses.
- CONTRIBUTING.md accuracy (#115). Documents `ruff format` (the project hasn't used black for a while) and lists what CI actually enforces so new contributors have a green-build target.
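The precedence bug is easy to reproduce in any POSIX shell: `&&` and `||` have equal precedence and associate left to right, so an unparenthesized chain runs the final command on every success path. A minimal demonstration (generic commands, not the CI step itself):

```shell
# Parses as (true || echo "fallback") && echo "always" — "always"
# prints even though the first command succeeded and no fallback ran.
true || echo "fallback" && echo "always"

# Grouping ties the tail to the failure branch only; this prints
# nothing when the first command succeeds.
true || { echo "fallback" && echo "grouped"; }
```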
Compliance audit closure (tasks/compliance-audit-2026-04-25.md)
| Severity | Finding | Status |
|---|---|---|
| CRITICAL | C-1 branch protection | CLOSED (with required status checks) |
| CRITICAL | C-2 fabricated no_hardcoded_secrets claim | CLOSED (#100) |
| HIGH | H-1 ruff full select per GOV-003 | CLOSED for {E,F,I,W,N,T20,B,UP,SIM,RUF,S}; ANN ratcheting per-module |
| HIGH | H-2 coverage threshold not enforced | CLOSED (#100) |
| HIGH | H-4 GOV-006 / CODEOWNERS solo-maintainer | CLOSED on the zettelforge side (#117); GOV-006 doc amendment in rolandpg/governance repo is separate scope |
| HIGH | H-5 SCA gate + SHA-pinned actions | CLOSED (#102 + #114 + SHA-pin commit) |
| MEDIUM | M-1 bare `except:` in production | CLOSED (#100) |
| MEDIUM | M-3 OCSF timezone_offset field | CLOSED (#100) |
| LOW | L-4 CI install-step shell precedence | CLOSED (#112) |

Outstanding: H-3 (mypy --strict in CI; needs a per-module ratchet plan for 393 errors across 38 files), M-2 (rewrite GOV-016 to match the YAML-frontmatter practice already in use), M-4 (lock file), H-1 ANN ratchet (121 findings across 38 files).
v2.4.3 — OCSF version self-correct + log-level env var + fastembed preload
Patch release preparing v2.4.3 for Growth Week 2 launch. Three instrumentation and developer-experience improvements.
Highlights
- feat: OCSF version self-correct (#96) — `ocsf_version` field now dynamically resolved from the installed `ocsf-schema` package at import time rather than hardcoded. Eliminates version skew after OCSF package updates.
- feat: `ZETTELFORGE_LOG_LEVEL` env var (#96) — `structlog` threshold respects the environment variable, consistent with other `ZETTELFORGE_*` overrides.
- feat: Fastembed preload (#96) — Embedding model loaded eagerly on first `MemoryManager` init rather than on first recall, reducing `remember()` latency on cold-start queries.
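The version self-correct pattern can be sketched with the stdlib metadata API (function name and fallback value are illustrative assumptions, not the shipped code):

```python
from importlib.metadata import PackageNotFoundError, version

def resolve_ocsf_version(default: str = "0.0.0") -> str:
    # Read the schema version from the installed distribution at
    # import time instead of hardcoding it, so the field can never
    # drift after a package upgrade.
    try:
        return version("ocsf-schema")
    except PackageNotFoundError:
        return default
```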
Upgrade action (Vigil)
Bump Vigil's ZettelForge pin to 2.4.3 and restart. The ZETTELFORGE_LOG_LEVEL env var lets you silence DEBUG noise in production without code changes.
See CHANGELOG.md for the full set of changes.
v2.4.2 — RFC-010 hotfix + RFC-009 Phase 0.5
Patch release bundling the RFC-010 enrichment-pipeline hotfix with the RFC-009 Phase 0.5 latency-attribution instrumentation and preliminary attribution artifact. Full response to the 2026-04-24 Vigil telemetry audit.
Highlights
- fix(enrichment): RFC-010 — `OllamaProvider` timeout plumbing + consolidation shutdown race guard (#88)
- feat(telemetry): RFC-009 Phase 0.5 — per-phase timers in `remember()` via `phase_timings_ms` (#90)
- docs: Phase 0.5 preliminary attribution — 98.4% of `remember()` wall-clock is LanceDB `notes_cti` writes; 7,356 uncompacted fragments identified (#91)
Honest scope note
This release does NOT yet address:
- The ~2,329 daily enrichment-job drops — those are driven by HTTP 200 + empty Ollama responses, not by hangs. Fix ships in RFC-009 Phases 1–3 (v2.5.0: durable outbox + circuit breaker).
- LanceDB fragment accumulation — identified here but not fixed here. RFC-009 is being revised to add periodic compaction to Phase 1 scope.
Upgrade action (Vigil)
Bump Vigil's ZettelForge pin to 2.4.2, restart, run ~1h of representative CTI traffic. The new phase_timings_ms will be emitted inside ocsf_api_activity events in zettelforge.log — this refines or falsifies the preliminary Phase 0.5 attribution in docs/superpowers/research/2026-04-24-phase-0.5-attribution-prelim.md.
See CHANGELOG.md for the full set of changes.
v2.4.1
Operational telemetry (RFC-007), SQLite backend fixes, and TypeDB authentication hardening.
Added
- Operational telemetry (RFC-007) (#85) — new per-query metrics stream at `~/.amem/telemetry/telemetry_YYYY-MM-DD.jsonl` when `ZETTELFORGE_LOG_LEVEL=DEBUG`. Five shipped components:
  - `TelemetryCollector` class with `start_query`/`log_recall`/`log_synthesis`/`log_feedback`/`auto_feedback_from_synthesis`, INFO/DEBUG-gated field sets, thread-safe JSONL append, 1-hour TTL on in-memory query context.
  - `MemoryManager` integration — `recall()` and `synthesize()` gain a non-breaking `actor=` kwarg. OCSF events extended via the sanctioned `unmapped` object with a `zf_` prefix (class_uid 6002 compliant). Narrow-scope `perf_counter` deltas capture `vector_latency_ms` and `graph_latency_ms`.
  - Daily aggregator (`python -m zettelforge.scripts.telemetry_aggregator`) emitting a `DailyMetrics` JSON report (latency averages, tier distribution, unused-notes count, top-utility notes).
  - Human evaluation workflow — 6-question rubric (`docs/human-evaluation-rubric.md`), sampler (`python -m zettelforge.scripts.human_eval_sampler`) that selects 20 random briefings as a fill-in Markdown template, and a `--write-events` path to append `event_type: "human_eval"` entries back to the telemetry log.
  - Optional Streamlit dashboard (`streamlit run src/zettelforge/scripts/telemetry_dashboard.py`) — query volume, latency p50/p95/max, tier distribution, utility trend, unused-notes warning.

  Privacy contract: raw note content never persisted (IDs / tiers / source_types / domains only); query text truncated at 200 chars INFO / 500 chars DEBUG; local-only, no network calls.
Fixed
- SQLite shutdown NPE (#84, closes the H3 finding from issue #83) — `close()` and `initialize()` are now lock-protected and idempotent. Readers and writers raise a clean `BackendClosedError` (new, in `storage_backend`) instead of the opaque `AttributeError: 'NoneType' object has no attribute 'execute'` seen 170× in production logs on 2026-04-23 during atexit. `memory_manager._enrichment_loop` and `_drain_enrichment_queue` catch `BackendClosedError` and exit cleanly.
- SQLite torn snapshot (#84, C1 from #83) — `export_snapshot()` now uses `sqlite3.Connection.backup()` for a page-consistent copy. The previous `shutil.copy2` path could produce a corrupt backup missing `-wal`/`-shm` sidecars, unsafe for DR restore.
- SQLite reindex race (#84, C2 from #83) — `reindex_vector()` now uses a single-lock targeted `UPDATE` on the `embedding_vector` column. The previous `get_note_by_id → rewrite_note` path spanned two lock acquisitions and could clobber concurrent `mark_access_dirty`/`evolve`/`supersede` edits via `INSERT OR REPLACE`.
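The snapshot fix leans on the stdlib backup API; a minimal sketch (paths and function name illustrative, not the shipped `export_snapshot()`):

```python
import sqlite3

def export_snapshot(db_path: str, dest_path: str) -> None:
    # sqlite3.Connection.backup() produces a page-consistent copy even
    # while writers are active, unlike a raw file copy that can tear
    # pages or miss the -wal/-shm sidecars.
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```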
Security
- TypeDB authentication hardening (#82) — removed known-insecure `admin`/`password` defaults from `TypeDBConfig` and `config.default.yaml`. `TypeDBConfig.__repr__` now redacts non-empty passwords as `***`. The config loader resolves `${TYPEDB_USERNAME}`/`${TYPEDB_PASSWORD}` env-var references in YAML (same pattern already used for `llm.api_key`), so credentials can stay in env / container secret stores rather than on disk.

  Migration: set `TYPEDB_USERNAME`/`TYPEDB_PASSWORD` in your environment or use the `${VAR}` references in a local `config.yaml`. Direct env overrides (`TYPEDB_USERNAME=…`) already worked and are unaffected.
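The `${VAR}` resolution pattern can be sketched as follows (illustrative; the shipped loader may differ in regex and missing-variable behavior):

```python
import os
import re

_ENV_REF = re.compile(r"\$\{([A-Z0-9_]+)\}")

def resolve_env_refs(value: str) -> str:
    # Replace ${VAR} references with the environment value so
    # credentials live in env / secret stores, never on disk.
    # Unset variables resolve to "" in this sketch.
    return _ENV_REF.sub(lambda m: os.environ.get(m.group(1), ""), value)
```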
Docs
- Architecture Deep Dive + Module Inventory for v2.4.0 (#80) — reference-level architecture documentation.
- RFC-007 Operational Telemetry (#85) — full design doc including the four subagent-resolved frictions (caller-opt-in `query_id` correlation, narrow-scope latency instrumentation, OCSF `unmapped` extension, hybrid `__new__`-bypass integration tests).
- Troubleshoot guide (#85) — new "Operational telemetry" subsection covering the three CLI entry points and the privacy contract.
Install
```bash
pip install --upgrade zettelforge
```
Full changelog: https://github.com/rolandpg/zettelforge/blob/master/CHANGELOG.md
v2.4.0 — Detection rules as memory, MCP Registry, SQLite hardening
Detection-rules-as-memory, MCP Registry publication, SQLite concurrency hardening, test-suite hygiene, and brand/docs polish.
Added
- Sigma + YARA as first-class memory entities with LLM rule explainer (#70)
- Detection Rules as Memory README section (#74)
- MCP Registry publication infra — `server.json` and PyPI `mcp-name` tag so ZettelForge can be published to registry.modelcontextprotocol.io (#75)
- Brand & docs polish — neural-chain architecture diagram with light/dark parity, canonical security channels + RFC 9116 `security.txt`, refreshed GitHub social preview (#61)
Fixed
- SQLite reader concurrency — 16 reader methods now hold `_write_lock`, closing a production read-during-write race that could surface as `pydantic.ValidationError` on NULL columns during concurrent enrichment (closes #68, #69)
- 3 CI test regressions stabilized (#67)
Changed
- Test suite hygiene — 280 → 305 passing, 17 → 10 skipped, 2 → 0 xfailed on 3.12. Migrated `langchain_retriever` to Pydantic V2 ConfigDict. Converted 10 CI-skipped LLM tests to the mock provider (RFC-002 Phase 1) (#62, #63, #64, #65)
Install
```bash
pip install --upgrade zettelforge
```
Full changelog: https://github.com/rolandpg/zettelforge/blob/master/CHANGELOG.md
v2.3.0 — Pluggable LLM providers, MCP module, SEO foundations
[2.3.0] - 2026-04-17
Pluggable LLM provider infrastructure (RFC-002 Phase 1), MCP server
as a first-class Python module, PyPI discoverability refresh, SEO
foundations across the docs site, and a full docs-vs-code
reconciliation. All additions are backward-compatible; no existing
API changes. Supersedes the never-tagged 2.2.1 metadata patch —
its PyPI classifier / keyword / image-URL changes are folded in
below.
Added
- Pluggable LLM provider infrastructure (RFC-002 Phase 1) — new `zettelforge.llm_providers` package with a `@runtime_checkable` `LLMProvider` protocol, a thread-safe registry, and built-in providers for `local` (llama-cpp-python), `ollama`, and `mock`. The public `generate()` signature is unchanged; all 7 existing call sites (`fact_extractor`, `memory_updater`, `synthesis_generator`, `intent_classifier`, `note_constructor`, `entity_indexer`, `memory_evolver`) keep working without modification. Third-party providers can register via the `zettelforge.llm_providers` entry-point group. `openai_compat` and `anthropic` providers land in Phase 2 and Phase 3.
- `LLMConfig` expanded — new `api_key`, `timeout`, `max_retries`, `fallback`, and `extra` fields. `api_key` supports `${ENV_VAR}` references and is redacted from `repr()`. Sensitive keys inside `extra` (matching `key|token|secret|password|credential|auth`) are redacted as well. New env overrides: `ZETTELFORGE_LLM_API_KEY`, `ZETTELFORGE_LLM_TIMEOUT`, `ZETTELFORGE_LLM_MAX_RETRIES`, `ZETTELFORGE_LLM_FALLBACK`.
- `LLMProviderConfigurationError` — new exception surfaced for non-recoverable provider setup problems (bad API key, missing optional SDK) so `generate()` can distinguish "try the fallback" from "stop and report".
- `llm_client.reload()` helper — clears the provider registry and config cache so test suites and long-lived processes can reconfigure the LLM backend without a process restart.
- Hardened .gitignore per GOV-023 — added `.env.*`, `*.key`, `*.pem`.
- MCP server as a first-class module — `python -m zettelforge.mcp` now works out of a `pip install zettelforge` with no git clone required. New package `zettelforge.mcp` (with `server.py`, `__main__.py`, and a console-script entry `zettelforge-mcp`). The previous entry point at `web/mcp_server.py` is retained as a thin backward-compat shim.
- Console scripts — `zettelforge` and `zettelforge-mcp` entry points added to `pyproject.toml`.
- How-to guides — migration (`migrate-jsonl-to-sqlite.md`), benchmark reproduction (`reproduce-benchmarks.md`), troubleshooting (`troubleshoot.md`), and upgrade (`upgrade.md`). Linked from the MkDocs nav.
- Design and About sections in the docs nav — RFC-001, RFC-002, RFC-003 and the origin-story narrative are now discoverable from docs.threatrecall.ai.
- RFC-003 design proposal (docs only) — read-path depth routing with a deterministic Quality Gate plus System 1 / System 2 recall paths. Ships with an adversarial-review artifact (4 blockers, 13 warnings). No runtime changes yet — implementation deferred.
- Archive directory — `docs/archive/` holds retired v1.0.0-alpha snapshots (`SKILL.md`, `PACKAGE_SUMMARY.md`) with a README explaining their provenance.
- `llm_ner` configuration reference — `docs/reference/configuration.md` now documents `llm_ner.enabled` and the `ZETTELFORGE_LLM_NER_ENABLED` environment override.
- Docs SEO foundation — per-page canonical URLs, OpenGraph and Twitter-card metadata, and a `SoftwareApplication` JSON-LD block on the home page via a `docs/overrides/main.html` theme override. The `softwareVersion` value is sourced from `config.extra.version` in `mkdocs.yml` so it stays in sync with releases.
- PyPI classifier refresh — added `Topic :: Security` (primary filter security engineers use to browse PyPI) and `Topic :: Software Development :: Libraries :: Python Modules`. Existing `Topic :: Scientific/Engineering :: Artificial Intelligence` retained. Development Status stays at `4 - Beta`.
- PyPI keyword refresh — swapped `agent-memory` → `agentic-memory` (emerging category keyword) and `zettelkasten` → `llm-memory` (direct intent match for Mem0/Graphiti discovery traffic). Still 10 keywords total; within the PyPI display limit.
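A minimal sketch of what the `@runtime_checkable` provider protocol mentioned in the RFC-002 bullet above enables — structural `isinstance` checks with no inheritance. The method surface here is illustrative, not the shipped `LLMProvider` signature:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class LLMProvider(Protocol):
    def generate(self, prompt: str, max_tokens: int = 256) -> str: ...

class MockProvider:
    # No inheritance from LLMProvider: a runtime_checkable protocol
    # matches on method presence alone, so third-party providers can
    # satisfy it without importing anything from the host package.
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        return f"echo: {prompt}"
```

This is the structural-typing property that lets entry-point-registered third-party providers plug in without a shared base class.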
Changed
- SECURITY.md — contact updated to contact@threatrecall.ai, supported-versions table refreshed to mark `2.3.x` as current and `2.2.x` as the prior minor release; storage section refreshed to reflect SQLite-by-default.
- `docs/llms.txt` — rewritten to match current reality (SQLite default, 19 runtime entity types, correct GOV-003/007/011/012 descriptions, MCP invocation).
- BENCHMARK_REPORT.md — CTIBench ATE row updated (F1 = 0.146); architecture summary reframed as SQLite + LanceDB default with TypeDB as an extension; `ctibench_results.json` date bumped.
- README — above-fold rewritten (CTA row, keyword density, PyPI-safe absolute-URL images). Pipeline step 1 entity count corrected from "10 types" to the 19 types `EntityExtractor` actually recognises.
- README image paths — `docs/assets/demo.gif` and `docs/assets/zettelforge_architecture.svg` rewritten to absolute raw.githubusercontent.com URLs so the PyPI long description renders correctly (relative paths 404 on the PyPI CDN). Pinned to the `master` ref; can be re-pinned to the `v2.3.0` tag in the next release PR if PyPI-side stability matters.
- `docs/superpowers/plans/` renamed to `docs/superpowers/research/` with a README making clear these are aspirational synthesis, not roadmap commitments. The stray untracked `docs/plans/` directory was removed.
- Tutorials and governance-controls reference — `last_updated` and `version` metadata refreshed.
- `zettelforge.ontology` exports — `TypedEntityStore`, `OntologyValidator`, `get_ontology_store`, `get_ontology_validator` removed from the top-level `__all__` (still importable from `zettelforge.ontology`). They are a parallel store not wired into `MemoryManager`.
- `observability.py` and `cache.py` headers — annotated as currently unwired; kept for future integration.
- OCSF `_PRODUCT_VERSION` — sourced from `importlib.metadata.version("zettelforge")` instead of a hard-coded string, so emitted OCSF events stop drifting when `__version__` bumps.
- OpenGraph `og:type` — `website` on the home page, `article` elsewhere (was unconditionally `article`).
Fixed
- OllamaProvider host routing — now instantiates `ollama.Client(host=self._url)` so the configured URL actually takes effect (previously the module-level `ollama.generate()` call ignored per-instance host).
- Provider registry race — `register()` now checks and mutates under the registry lock, closing a TOCTOU window on concurrent provider registration.
- MCP server lazy instantiation — `MemoryManager` is now created on first tool call rather than at server import time, so `--help` and protocol-handshake tests don't pay the model-load cost.
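The registry-race fix follows the standard check-and-mutate-under-one-lock pattern; a minimal sketch with illustrative names (not the shipped registry API):

```python
import threading

class ProviderRegistry:
    # Checking for an existing name outside the lock and inserting
    # inside it leaves the TOCTOU window described above; doing both
    # under a single lock acquisition closes it.
    def __init__(self):
        self._lock = threading.Lock()
        self._providers = {}

    def register(self, name, provider, *, overwrite=False):
        with self._lock:
            if name in self._providers and not overwrite:
                raise ValueError(f"provider {name!r} already registered")
            self._providers[name] = provider
```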
Removed
- Six superseded branches that had already been squash-merged into
master —feat/causal-chain-fix-and-demo-gif,
feat/entity-vocabulary-expansion,
feature/RFC-001-conversational-entity-extractor,
fix/intent-classifier-graph-weight,
fix/p0-production-blockers,feat/remember-evolve.