Releases: rolandpg/zettelforge

v2.6.2 — config editor UI/UX fix

27 Apr 06:13
172ea85

UI/UX release. Fixes the /config page so the Apply button actually works and surfaces enum-style settings as dropdowns instead of free-text inputs. No data migration. No config changes. No API contract changes.

Fixed

  • /config "Save Changes" button is no longer dead. The Quick Settings panel called saveConfigForm() and reloadConfig() — neither function was defined anywhere, so the button silently no-op'd and the panel rendered "Loading schema..." forever. Replaced with a real form-based editor whose Apply button PUTs a nested payload to /api/config and reloads from server on success.
  • YAML scalar emitter now escapes backslashes correctly. CodeQL #35/#36: replace(/"/g, '\\"') left literal \ untouched, so a value like foo\"bar would have prematurely terminated the scalar. Now escapes \ first, then ".
  • coerce() no longer overwrites numeric settings with 0 on empty input. Clearing a number field now reverts the change rather than queuing 0, which was the previous (silent, surprising) behavior.
  • YAML parser correctly handles list values (e.g. synthesis.tier_filter) when the YAML uses the indented key:\n - item form. Stack frames now carry parentObj/parentKey so the first list child can convert the optimistically-created {} into [] and append correctly.
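The escaping-order fix above generalizes: escape backslashes before quotes, otherwise the quote pass's own escape backslashes get doubled (and escaping only quotes, as pre-2.6.2, leaves raw backslashes to corrupt the scalar). A minimal Python sketch of the same emitter logic; the shipped fix lives in the /config page's JavaScript, and `quote_yaml_scalar` is an illustrative name:

```python
def quote_yaml_scalar(value: str) -> str:
    """Emit a double-quoted YAML scalar, escaping backslashes before quotes."""
    escaped = value.replace("\\", "\\\\")  # backslashes first; doing quotes
    escaped = escaped.replace('"', '\\"')  # first would double their escapes
    return f'"{escaped}"'
```

With this ordering a value like `foo\"bar` serializes to a scalar that a YAML parser round-trips instead of terminating early at the embedded quote.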

Added

  • Form-based config editor with dropdowns. /config now renders a grouped settings form alongside the YAML editor. Known enum fields (backend, embedding.provider, llm.provider, llm.local_backend, logging.level, synthesis.default_format, governance.pii.action) render as <select> controls. Restart-required leaves get a "restart required" badge sourced from the same set the server uses, so the UI warning is never out of sync with the server's classification.
  • GET /api/config/meta endpoint. Single source of truth for the UI's "restart required" badge — eliminates server/UI drift on _RESTART_REQUIRED_FIELDS. Both UIs fetch on load with hardcoded fallback for offline-server safety.
  • WAI-ARIA Tabs pattern completed on /config: aria-selected, aria-controls, role=tabpanel, aria-labelledby, roving tabindex, and arrow-key navigation (Left/Right/Home/End).
  • Pending-changes counter and Revert button. The form tracks dirty fields by dotted path, builds a single nested payload on Apply, and shows N pending change(s) (M need restart) next to the buttons.
  • YAML editor accepts both YAML and JSON (was JSON-only despite the label) and skips redacted *** secrets so they aren't PUT back as literal strings.
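The dirty-field bookkeeping described above, dotted paths folded into a single nested payload on Apply, can be sketched in a few lines. `build_payload` is an illustrative name, not the shipped helper:

```python
def build_payload(dirty: dict) -> dict:
    """Fold dotted-path dirty fields into one nested PUT payload.

    {"llm.provider": "ollama", "llm.timeout": 180} becomes
    {"llm": {"provider": "ollama", "timeout": 180}}.
    """
    payload: dict = {}
    for path, value in dirty.items():
        node = payload
        *parents, leaf = path.split(".")
        for key in parents:
            node = node.setdefault(key, {})  # create intermediate sections
        node[leaf] = value
    return payload
```

Building one nested payload (rather than one PUT per field) keeps the Apply action atomic from the server's point of view.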

Tests

31 passed, 2 skipped (was 24 + 2 in 2.6.1). New coverage: enum round-trip for restart-required + live fields, multi-section nested payload from a single Apply, /api/config/meta shape and auth, list-value PUT round-trip, and an HTML regression guard proving the dead saveConfigForm/reloadConfig handlers are gone.


Install: pip install --upgrade zettelforge==2.6.2

Full changelog: https://github.com/rolandpg/zettelforge/blob/master/CHANGELOG.md

v2.6.1 — RFC-015 web GUI hotfix

26 Apr 04:07
53fab2f

Hotfix release. Resolves three blockers found in code review of the
RFC-015 web GUI shipped in v2.6.0. No data migration. No config changes.

Fixed

  • /config HTML page now renders. _to_dict was defined as a closure
    inside get_config_endpoint, so every render of /config raised
    NameError, was silently swallowed by a bare except, and left the
    YAML body blank on initial server-side render. Promoted to a module-level
    _config_to_dict helper used by both routes. (PR #131)
  • PUT /api/config correctly reports nested restart-required fields.
    The check compared top-level payload keys against a set of dotted-path
    fields, so payloads like {"embedding": {"provider": "x"}} were
    reported as applied: ["embedding"], pending_restart: [], telling
    operators a restart-required change had taken effect when it had not.
    Added _flatten_keys to walk nested payloads to dotted leaf paths;
    applied and pending_restart now contain accurate dotted paths.
    (PR #131)
  • /config HTML route is now auth-gated. /api/config was protected,
    but the HTML shell (and once the _to_dict bug was fixed, its
    server-rendered YAML body) was reachable without an API key. Added
    Depends(require_api_guard) and made the YAML body redact secrets
    before serialization. (PR #131)

Tests

  • Added four regression tests in tests/test_web_api.py covering all
    three fixes. 24 passed, 2 skipped (was 20 + 2).

Upgrade

pip install -U 'zettelforge==2.6.1'

No config or data migration required.

v2.5.2

25 Apr 17:48
385fc7b

[2.5.2] - 2026-04-25

Hotfix release. Restores end-to-end functionality of synthesis, causal
triple extraction, fact extraction, LLM NER, and neighbor evolution
under any reasoning-style LLM (qwen3.5+, qwen3.6, nemotron-3, etc.).

Fixed

  • Reasoning-model token starvation across every LLM call site.
    Reasoning models emit hidden <think>...</think> tokens that count
    against num_predict but never appear in the final response field
    Ollama returns. Pre-2.5.2 token caps (max_tokens=300/400/800/
    1024) were exhausted entirely by the thinking phase on these
    models, leaving the JSON answer empty. Symptoms: synthesis fell back
    to "No specific answer found for: …" on every query; causal triple
    extraction persisted 0 edges despite rich CTI text; LLM NER
    silently no-opped; neighbor evolution parse_failed{schema=..., raw=""} warnings flooded the log.

    Bumped every generate(..., max_tokens=...) call site to give
    reasoning models room to think and emit a final answer. Affected
    files:

    File                                   Old cap   New cap
    note_constructor.py (causal triples)       300      8000
    synthesis_generator.py                     800      2500
    fact_extractor.py                          400      2500
    entity_indexer.py (NER)                    300      2500
    memory_evolver.py (2 sites)               1024      2500

    Causal extraction needs the largest budget because the prompt asks
    the model to enumerate every causal relation in a passage; this
    triggers the longest reasoning chains anywhere in the system.
    Empirically, on qwen3.5:9b, a 4000-token cap was only
    stochastically sufficient (eval_count varied 2.8k–4k+, ~70%
    success rate), so 8000 is the conservative cap that keeps the
    success rate above 95% on the same model. The other call sites
    converge with less reasoning overhead, so 2500 suffices there.

  • LLM client timeout bumped 60s → 180s. LLMConfig.timeout and
    OllamaProvider constructor default were both 60 seconds — well
    below the 60–120s wall-clock time of a 4000–8000 token reasoning
    generation on a 9B-Q4_K_M model. ReadTimeout was firing during
    causal extraction even when the model would have returned valid
    JSON given another 30 seconds. Bumped both defaults plus
    config.default.yaml to 180s.

    Verified end-to-end on qwen3.5:9b:

    • Synthesis: query "What CVE does DROPBEAR exploit?" returns
      "CVE-2024-3094" with 1 source citation (was returning
      "No specific answer found for: …" on every call pre-2.5.2).
    • Causal extraction: corpus seeded with APT28/DROPBEAR/CVE-2024-3094
      text yields a 4-triple JSON array in 137s wall time:
      APT28 → targets → manufacturing sector,
      APT28 → uses → DROPBEAR,
      DROPBEAR → exploits → CVE-2024-3094,
      APT28 → attributed_to → Russian GRU Unit 26165.

Operational note

Slow models. With 8000 tokens of reasoning budget, single causal
extraction calls now take 60–140s on a 9B model. remember(sync=True)
in this configuration will block 1–3 minutes per note. The default
async path (background enrichment queue) is the preferred mode.
Operators on faster hardware or smaller models can lower the caps via
config/env if needed, but the v2.5.2 defaults trade latency for
end-to-end correctness on the reference model.

Notes

This explains the evolution_parse_failed and causal_triples
parse_failed cascades documented in the v2.4.x Vigil incident. The
v2.4.2 PR #95 Tier 1/2 LLM observability surfaced the empty responses,
but root-causing them to the token-cap-vs-thinking-budget interaction
had to wait until the v2.5.1 perf-bench run made the failure
reproducible end-to-end.

v2.5.1

25 Apr 16:41
824ed96

[2.5.1] - 2026-04-25

Hotfix release for a crash surfaced during the v2.5.0 perf benchmark run.

Fixed

  • KnowledgeGraph._cache_edge crashed on legacy-schema edges.
    Long-running deployments accumulated kg_edges.jsonl entries written
    by a now-removed pre-v2.5.x writer that used
    {source_id, target_id, relation_type} instead of the canonical
    {from_node_id, to_node_id, relationship} keys. The loader hard-failed
    with KeyError: 'from_node_id' on the first such row, taking down
    every recall() and synthesize() that touches the KG. Affects any
    workspace with mixed-schema edge history; observed locally with 189k
    edges where ~80k were the legacy shape.
    _normalize_edge_schema() now remaps legacy keys to canonical on load
    and silently drops entries that are still un-normalizable, with a
    count logged at WARNING so operators can see the skip volume.
    Six new regression tests in tests/test_kg_edge_schema.py cover
    pass-through, remap, missing-fields, non-dict, mixed-batch, and
    corrupt-JSON cases. The previously-broken environment-dependent
    test_basic.py::test_ingest_relationship now passes deterministically.
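The remap described above is mechanical: rename the legacy keys, keep canonical rows as-is, and drop anything still missing required fields. A sketch of the technique; the shipped `_normalize_edge_schema()` may differ, for instance in how it counts and logs the dropped entries:

```python
_LEGACY_KEY_MAP = {
    "source_id": "from_node_id",
    "target_id": "to_node_id",
    "relation_type": "relationship",
}
_CANONICAL = ("from_node_id", "to_node_id", "relationship")

def normalize_edge_schema(edge):
    """Remap legacy edge keys to canonical; return None if un-normalizable."""
    if not isinstance(edge, dict):
        return None  # corrupt JSONL row (string, list, etc.)
    fixed = {_LEGACY_KEY_MAP.get(k, k): v for k, v in edge.items()}
    if all(k in fixed for k in _CANONICAL):
        return fixed
    return None  # still missing required fields; caller counts and skips
```

The caller would tally the `None` results and emit one WARNING with the skip count, rather than failing the whole load on the first bad row.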

v2.5.0

25 Apr 16:17
4be6c5d

[2.5.0] - 2026-04-25

Compliance-driven minor release. Closes every CRITICAL and HIGH audit
finding except H-3 (mypy strict) and the ANN slice of H-1, both of
which need per-module ratchet plans. Also adds two new optional LLM
backends, a Presidio PII detector, and supply-chain hardening.

Added

  • RFC-011 — Local LLM backend selection (#104). New local_backend
    config knob picks between llama-cpp-python (GGUF) and
    onnxruntime-genai (ONNX) at runtime. Both ship as optional extras
    (pip install zettelforge[local] or [local-onnx]).
  • RFC-012 — LiteLLM unified provider (#108). Routes to 100+
    upstream LLM providers via the LiteLLM SDK. Optional extra
    (pip install zettelforge[litellm]); the base package never imports
    it unless the SDK is present.
  • RFC-013 — Microsoft Presidio PII detection (#118). Optional PII
    validator with three policies (log / redact / block),
    configurable via governance.pii.*. CTI allowlist excludes
    IP_ADDRESS / URL / DOMAIN_NAME from detection so legitimate
    threat-intel indicators flow through unmodified. Soft dependency —
    pip install zettelforge[pii] to activate; the base package never
    imports presidio_analyzer unless the SDK is present.
  • GOV-009 Snyk SCA + SAST declared in controls.yaml (#114). The
    spec-drift validator now walks every .github/workflows/*.yml so
    controls whose CI step lives outside ci.yml (Snyk's separate
    workflow) can be honestly declared.
  • GOV-006 solo-maintainer compensating controls (#117). New
    controls.yaml entry pins the existing CI gates (lint, tests,
    governance spec-drift) as compensating controls for the GOV-006
    two-person review rule that cannot be physically satisfied with one
    human maintainer. CODEOWNERS updated with explanatory comment.
  • SECURITY.md + CODEOWNERS added to the repo root for vulnerability
    disclosure and review attribution.

Changed

  • All GitHub Actions are now SHA-pinned (audit H-5 hardening). Every
    uses: org/repo@vX reference replaced with uses: org/repo@<full-sha> # vX.Y.Z
    to prevent supply-chain attacks via tag rewrites.
  • Ruff rule set ratcheted to GOV-003 §"Tooling and Automation"
    minus ANN (#106 + #107 + #109 + #111 + #113). Active select list:
    {E, F, I, W, N, T20, B, UP, SIM, RUF, S}. Per-line # noqa: SXXX
    annotations document each accepted exception (best-effort fallbacks,
    non-crypto RNG, ?-bound SQL with constant column lists).
    RUF002/RUF003 ignored globally for stylistic en-dash and ×.
  • CI install-step shell precedence fixed (#112). The
    pip install -e ".[dev]" || pip install -e "." && pip install pytest...
    chain parsed as (A || B) && C, so the pytest install ran on
    every success path including when [dev] already provided pytest.
    Wrapped the fallback in parentheses.
  • CONTRIBUTING.md accuracy (#115). Documents ruff format
    (project hasn't used black for a while) and lists what CI actually
    enforces so new contributors have a green-build target.

Compliance audit closure (tasks/compliance-audit-2026-04-25.md)

Severity  Finding                                    Status
CRITICAL  C-1 branch protection                      CLOSED (with required status checks)
CRITICAL  C-2 fabricated no_hardcoded_secrets claim  CLOSED (#100)
HIGH      H-1 ruff full select per GOV-003           CLOSED for {E,F,I,W,N,T20,B,UP,SIM,RUF,S}; ANN ratcheting per-module
HIGH      H-2 coverage threshold not enforced        CLOSED (#100)
HIGH      H-4 GOV-006 / CODEOWNERS solo-maintainer   CLOSED on the zettelforge side (#117); GOV-006 doc amendment in rolandpg/governance repo is separate scope
HIGH      H-5 SCA gate + SHA-pinned actions          CLOSED (#102 + #114 + SHA-pin commit)
MEDIUM    M-1 bare except: in production             CLOSED (#100)
MEDIUM    M-3 OCSF timezone_offset field             CLOSED (#100)
LOW       L-4 CI install-step shell precedence       CLOSED (#112)

Outstanding: H-3 (mypy --strict in CI; needs per-module ratchet plan
for 393 errors across 38 files), M-2 (rewrite GOV-016 to match the
YAML-frontmatter practice already in use), M-4 (lock file), H-1 ANN
ratchet (121 findings across 38 files).

v2.4.3 — OCSF version self-correct + log-level env var + fastembed preload

25 Apr 02:02

Patch release preparing v2.4.3 for Growth Week 2 launch. Three instrumentation and developer-experience improvements.

Highlights

  • feat: OCSF version self-correct (#96) — ocsf_version field now dynamically resolved from the installed ocsf-schema package at import time rather than hardcoded. Eliminates version skew after OCSF package updates.
  • feat: ZETTELFORGE_LOG_LEVEL env var (#96) — structlog threshold respects the environment variable, consistent with other ZETTELFORGE_* overrides.
  • feat: Fastembed preload (#96) — Embedding model loaded eagerly on first MemoryManager init rather than on first recall, reducing remember() latency on cold-start queries.

Upgrade action (Vigil)

Bump Vigil's ZettelForge pin to 2.4.3 and restart. The ZETTELFORGE_LOG_LEVEL env var lets you silence DEBUG noise in production without code changes.

See CHANGELOG.md for the full set of changes.

v2.4.2 — RFC-010 hotfix + RFC-009 Phase 0.5

24 Apr 22:35
b576007

Patch release bundling the RFC-010 enrichment-pipeline hotfix with the RFC-009 Phase 0.5 latency-attribution instrumentation and preliminary attribution artifact. Full response to the 2026-04-24 Vigil telemetry audit.

Highlights

  • fix(enrichment): RFC-010 — OllamaProvider timeout plumbing + consolidation shutdown race guard (#88)
  • feat(telemetry): RFC-009 Phase 0.5 — per-phase timers in remember() via phase_timings_ms (#90)
  • docs: Phase 0.5 preliminary attribution — 98.4% of remember() wall-clock is LanceDB notes_cti writes; 7,356 uncompacted fragments identified (#91)

Honest scope note

This release does NOT yet address:

  • The ~2,329 daily enrichment-job drops — those are driven by HTTP 200 + empty Ollama responses, not by hangs. Fix ships in RFC-009 Phases 1–3 (v2.5.0: durable outbox + circuit breaker).
  • LanceDB fragment accumulation — identified here but not fixed here. RFC-009 is being revised to add periodic compaction to Phase 1 scope.

Upgrade action (Vigil)

Bump Vigil's ZettelForge pin to 2.4.2, restart, run ~1h of representative CTI traffic. The new phase_timings_ms will be emitted inside ocsf_api_activity events in zettelforge.log — this refines or falsifies the preliminary Phase 0.5 attribution in docs/superpowers/research/2026-04-24-phase-0.5-attribution-prelim.md.

See CHANGELOG.md for the full set of changes.

v2.4.1

24 Apr 05:10
ad0908f

Operational telemetry (RFC-007), SQLite backend fixes, and TypeDB authentication hardening.

Added

  • Operational telemetry (RFC-007) (#85) — new per-query metrics stream at ~/.amem/telemetry/telemetry_YYYY-MM-DD.jsonl when ZETTELFORGE_LOG_LEVEL=DEBUG. Five shipped components:

    • TelemetryCollector class with start_query / log_recall / log_synthesis / log_feedback / auto_feedback_from_synthesis. INFO/DEBUG-gated field sets, thread-safe JSONL append, 1-hour TTL on in-memory query context.
    • MemoryManager integration — recall() and synthesize() gain a non-breaking actor= kwarg. OCSF events extended via the sanctioned unmapped object with zf_ prefix (class_uid 6002 compliant). Narrow-scope perf_counter deltas capture vector_latency_ms and graph_latency_ms.
    • Daily aggregator (python -m zettelforge.scripts.telemetry_aggregator) emitting a DailyMetrics JSON report (latency averages, tier distribution, unused-notes count, top-utility notes).
    • Human evaluation workflow — 6-question rubric (docs/human-evaluation-rubric.md), sampler (python -m zettelforge.scripts.human_eval_sampler) that selects 20 random briefings as a fill-in Markdown template, and a --write-events path to append event_type: "human_eval" entries back to the telemetry log.
    • Optional Streamlit dashboard (streamlit run src/zettelforge/scripts/telemetry_dashboard.py) — query volume, latency p50/p95/max, tier distribution, utility trend, unused-notes warning.

    Privacy contract: raw note content never persisted (IDs / tiers / source_types / domains only); query text truncated at 200 chars INFO / 500 chars DEBUG; local-only, no network calls.
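For reference, the thread-safe JSONL append reduces to one lock around the write so concurrent queries never interleave partial lines. This is an illustrative sketch, not the shipped TelemetryCollector:

```python
import json
import threading
from pathlib import Path

class JsonlAppender:
    """Append one serialized event per line, safely across threads."""

    def __init__(self, path: Path):
        self._path = path
        self._lock = threading.Lock()

    def append(self, event: dict) -> None:
        line = json.dumps(event, separators=(",", ":"))
        with self._lock:  # serialize writers so lines never interleave
            with self._path.open("a", encoding="utf-8") as fh:
                fh.write(line + "\n")
```

Opening in append mode per event keeps the file valid even if the process dies between writes; a long-lived handle would work too at the cost of a flush policy.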

Fixed

  • SQLite shutdown NPE (#84, closes the H3 finding from issue #83) — close() and initialize() are now lock-protected and idempotent. Readers and writers raise a clean BackendClosedError (new, in storage_backend) instead of the opaque AttributeError: 'NoneType' object has no attribute 'execute' seen 170× in production logs on 2026-04-23 during atexit. memory_manager._enrichment_loop and _drain_enrichment_queue catch BackendClosedError and exit cleanly.
  • SQLite torn snapshot (#84, C1 from #83) — export_snapshot() now uses sqlite3.Connection.backup() for a page-consistent copy. The previous shutil.copy2 path could produce a corrupt backup missing -wal / -shm sidecars, unsafe for DR restore.
  • SQLite reindex race (#84, C2 from #83) — reindex_vector() now uses a single-lock targeted UPDATE on the embedding_vector column. The previous get_note_by_id → rewrite_note path spanned two lock acquisitions and could clobber concurrent mark_access_dirty / evolve / supersede edits via INSERT OR REPLACE.
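The backup() approach is worth showing because naive file copies of a live SQLite database miss pending -wal pages. A minimal sketch of a page-consistent snapshot; `export_snapshot` here is illustrative, not the shipped method:

```python
import sqlite3

def export_snapshot(src_path: str, dest_path: str) -> None:
    """Copy a SQLite database page-consistently via the backup API.

    Unlike shutil.copy2, Connection.backup() folds pending WAL pages
    into the destination, so the copy is safe for DR restore.
    """
    src = sqlite3.connect(src_path)
    try:
        dest = sqlite3.connect(dest_path)
        try:
            src.backup(dest)  # stdlib since Python 3.7
        finally:
            dest.close()
    finally:
        src.close()
```

The backup API also tolerates concurrent writers on the source connection, which a filesystem copy cannot.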

Security

  • TypeDB authentication hardening (#82) — removed known-insecure admin / password defaults from TypeDBConfig and config.default.yaml. TypeDBConfig.__repr__ now redacts non-empty passwords as ***. The config loader resolves ${TYPEDB_USERNAME} / ${TYPEDB_PASSWORD} env-var references in YAML (same pattern already used for llm.api_key), so credentials can stay in env / container secret stores rather than on disk.

    Migration: set TYPEDB_USERNAME / TYPEDB_PASSWORD in your environment or use the ${VAR} references in a local config.yaml. Direct env overrides (TYPEDB_USERNAME=…) already worked and are unaffected.
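The ${VAR} reference pattern can be sketched as a regex substitution over scalar values. The real loader may differ, for instance in how it treats unset variables, which this sketch maps to an empty string:

```python
import os
import re

_ENV_REF = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env_refs(value: str) -> str:
    """Replace ${VAR} references with environment values (empty if unset)."""
    return _ENV_REF.sub(lambda m: os.environ.get(m.group(1), ""), value)
```

This keeps credentials out of config.yaml: the file stores `password: ${TYPEDB_PASSWORD}` and the secret lives only in the process environment.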

Docs

  • Architecture Deep Dive + Module Inventory for v2.4.0 (#80) — reference-level architecture documentation.
  • RFC-007 Operational Telemetry (#85) — full design doc including the four subagent-resolved frictions (caller-opt-in query_id correlation, narrow-scope latency instrumentation, OCSF unmapped extension, hybrid __new__-bypass integration tests).
  • Troubleshoot guide (#85) — new "Operational telemetry" subsection covering the three CLI entry points and the privacy contract.

Install

```bash
pip install --upgrade zettelforge
```

Full changelog: https://github.com/rolandpg/zettelforge/blob/master/CHANGELOG.md

v2.4.0 — Detection rules as memory, MCP Registry, SQLite hardening

19 Apr 04:01
4c38543

Detection-rules-as-memory, MCP Registry publication, SQLite concurrency hardening, test-suite hygiene, and brand/docs polish.

Added

  • Sigma + YARA as first-class memory entities with LLM rule explainer (#70)
  • Detection Rules as Memory README section (#74)
  • MCP Registry publication infra — `server.json` and PyPI `mcp-name` tag so ZettelForge can be published to registry.modelcontextprotocol.io (#75)
  • Brand & docs polish — neural-chain architecture diagram with light/dark parity, canonical security channels + RFC 9116 `security.txt`, refreshed GitHub social preview (#61)

Fixed

  • SQLite reader concurrency — 16 reader methods now hold `_write_lock`, closing a production read-during-write race that could surface as `pydantic.ValidationError` on NULL columns during concurrent enrichment (closes #68, #69)
  • 3 CI test regressions stabilized (#67)

Changed

  • Test suite hygiene — 280 → 305 passing, 17 → 10 skipped, 2 → 0 xfailed on 3.12. Migrated `langchain_retriever` to Pydantic V2 ConfigDict. Converted 10 CI-skipped LLM tests to the mock provider (RFC-002 Phase 1) (#62, #63, #64, #65)

Install

```bash
pip install --upgrade zettelforge
```

Full changelog: https://github.com/rolandpg/zettelforge/blob/master/CHANGELOG.md

v2.3.0 — Pluggable LLM providers, MCP module, SEO foundations

17 Apr 17:58
dee4f7b

[2.3.0] - 2026-04-17

Pluggable LLM provider infrastructure (RFC-002 Phase 1), MCP server
as a first-class Python module, PyPI discoverability refresh, SEO
foundations across the docs site, and a full docs-vs-code
reconciliation. All additions are backward-compatible; no existing
API changes. Supersedes the never-tagged 2.2.1 metadata patch —
its PyPI classifier / keyword / image-URL changes are folded in
below.

Added

  • Pluggable LLM provider infrastructure (RFC-002 Phase 1) — new
    zettelforge.llm_providers package with a @runtime_checkable
    LLMProvider protocol, a thread-safe registry, and built-in
    providers for local (llama-cpp-python), ollama, and mock.
    The public generate() signature is unchanged; all 7 existing call
    sites (fact_extractor, memory_updater, synthesis_generator,
    intent_classifier, note_constructor, entity_indexer,
    memory_evolver) keep working without modification. Third-party
    providers can register via the zettelforge.llm_providers
    entry-point group. openai_compat and anthropic providers land
    in Phase 2 and Phase 3.
  • LLMConfig expanded — new api_key, timeout, max_retries,
    fallback, and extra fields. api_key supports ${ENV_VAR}
    references and is redacted from repr(). Sensitive keys inside
    extra (matching key|token|secret|password|credential|auth) are
    redacted as well. New env overrides: ZETTELFORGE_LLM_API_KEY,
    ZETTELFORGE_LLM_TIMEOUT, ZETTELFORGE_LLM_MAX_RETRIES,
    ZETTELFORGE_LLM_FALLBACK.
  • LLMProviderConfigurationError — new exception surfaced for
    non-recoverable provider setup problems (bad API key, missing
    optional SDK) so generate() can distinguish "try the fallback"
    from "stop and report".
  • llm_client.reload() helper — clears the provider registry
    and config cache so test suites and long-lived processes can
    reconfigure the LLM backend without a process restart.
  • Hardened .gitignore per GOV-023 — added .env.*, *.key,
    *.pem.
  • MCP server as a first-class module — python -m zettelforge.mcp
    now works out of a pip install zettelforge with no git clone
    required. New package zettelforge.mcp (with server.py,
    __main__.py, and a console-script entry zettelforge-mcp).
    The previous entry point at web/mcp_server.py is retained as a
    thin backward-compat shim.
  • Console scripts — zettelforge and zettelforge-mcp entry
    points added to pyproject.toml.
  • How-to guides — migration (migrate-jsonl-to-sqlite.md),
    benchmark reproduction (reproduce-benchmarks.md), troubleshooting
    (troubleshoot.md), and upgrade (upgrade.md). Linked from the
    MkDocs nav.
  • Design and About sections in the docs nav — RFC-001, RFC-002,
    RFC-003 and the origin-story narrative are now discoverable from
    docs.threatrecall.ai.
  • RFC-003 design proposal (docs only) — read-path depth routing
    with a deterministic Quality Gate plus System 1 / System 2 recall
    paths. Ships with an adversarial-review artifact (4 blockers, 13
    warnings). No runtime changes yet — implementation deferred.
  • Archive directory — docs/archive/ holds retired v1.0.0-alpha
    snapshots (SKILL.md, PACKAGE_SUMMARY.md) with a README explaining
    their provenance.
  • llm_ner configuration reference — docs/reference/configuration.md
    now documents llm_ner.enabled and the ZETTELFORGE_LLM_NER_ENABLED
    environment override.
  • Docs SEO foundation — per-page canonical URLs, OpenGraph and
    Twitter-card metadata, and a SoftwareApplication JSON-LD block on
    the home page via a docs/overrides/main.html theme override. The
    softwareVersion value is sourced from config.extra.version in
    mkdocs.yml so it stays in sync with releases.
  • PyPI classifier refresh — added Topic :: Security (primary
    filter security engineers use to browse PyPI) and
    Topic :: Software Development :: Libraries :: Python Modules.
    Existing Topic :: Scientific/Engineering :: Artificial Intelligence
    retained. Development Status stays at 4 - Beta.
  • PyPI keyword refresh — swapped agent-memory → agentic-memory
    (emerging category keyword) and zettelkasten → llm-memory
    (direct intent match for Mem0/Graphiti discovery traffic). Still
    10 keywords total; within the PyPI display limit.
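The sensitive-key redaction rule above (keys matching key|token|secret|password|credential|auth) reduces to a case-insensitive regex filter over the extra dict. An illustrative sketch, not the shipped LLMConfig code:

```python
import re

_SENSITIVE = re.compile(r"key|token|secret|password|credential|auth", re.IGNORECASE)

def redact_extra(extra: dict) -> dict:
    """Copy extra, replacing values under sensitive-looking keys with '***'."""
    return {
        key: "***" if _SENSITIVE.search(key) else value
        for key, value in extra.items()
    }
```

Matching on substrings of the key name (rather than exact names) is what lets arbitrary provider-specific fields like `azure_api_key` get caught without an allowlist.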

Changed

  • SECURITY.md — contact updated to contact@threatrecall.ai,
    supported-versions table refreshed to mark 2.3.x as current and
    2.2.x as the prior minor release; storage section refreshed to
    reflect SQLite-by-default.
  • docs/llms.txt — rewritten to match current reality (SQLite
    default, 19 runtime entity types, correct GOV-003/007/011/012
    descriptions, MCP invocation).
  • BENCHMARK_REPORT.md — CTIBench ATE row updated (F1 = 0.146);
    architecture summary reframed as SQLite + LanceDB default with
    TypeDB as an extension; ctibench_results.json date bumped.
  • README — above-fold rewritten (CTA row, keyword density,
    PyPI-safe absolute-URL images). Pipeline step 1 entity count
    corrected from "10 types" to the 19 types EntityExtractor
    actually recognises.
  • README image paths — docs/assets/demo.gif and
    docs/assets/zettelforge_architecture.svg rewritten to absolute
    raw.githubusercontent.com URLs so the PyPI long description
    renders correctly (relative paths 404 on the PyPI CDN). Pinned to
    the master ref; can be re-pinned to the v2.3.0 tag in the
    next release PR if PyPI-side stability matters.
  • docs/superpowers/plans/ renamed to docs/superpowers/research/
    with a README making clear these are aspirational synthesis, not
    roadmap commitments. The stray untracked docs/plans/ directory
    was removed.
  • Tutorials and governance-controls reference — last_updated
    and version metadata refreshed.
  • zettelforge.ontology exports — TypedEntityStore,
    OntologyValidator, get_ontology_store, get_ontology_validator
    removed from the top-level __all__ (still importable from
    zettelforge.ontology). They are a parallel store not wired into
    MemoryManager.
  • observability.py and cache.py headers — annotated as
    currently unwired; kept for future integration.
  • OCSF _PRODUCT_VERSION — sourced from
    importlib.metadata.version("zettelforge") instead of a hard-coded
    string, so emitted OCSF events stop drifting when __version__
    bumps.
  • OpenGraph og:type — website on the home page, article
    elsewhere (was unconditionally article).
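Sourcing a product version from installed metadata looks like the following sketch; the fallback default is an assumption added for illustration, since importlib.metadata raises when the distribution is not installed:

```python
from importlib import metadata

def product_version(dist: str = "zettelforge", fallback: str = "0.0.0") -> str:
    """Resolve the installed distribution version, with a safe fallback."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return fallback  # e.g. running from a source checkout
```

Reading the version from metadata at emit time means OCSF events track whatever is actually installed, instead of drifting from a hard-coded string.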

Fixed

  • OllamaProvider host routing — now instantiates
    ollama.Client(host=self._url) so the configured URL actually
    takes effect (previously the module-level ollama.generate() call
    ignored per-instance host).
  • Provider registry race — register() now checks and mutates
    under the registry lock, closing a TOCTOU window on concurrent
    provider registration.
  • MCP server lazy instantiation — MemoryManager is now created
    on first tool call rather than at server import time, so --help
    and protocol-handshake tests don't pay the model-load cost.
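The TOCTOU fix pattern is to perform the membership check and the insert under the same lock acquisition. An illustrative sketch with simplified names, not the shipped registry:

```python
import threading

class ProviderRegistry:
    """Register providers with check-and-insert under one lock (no TOCTOU)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._providers = {}

    def register(self, name: str, provider: object) -> None:
        with self._lock:  # check and mutate atomically
            if name in self._providers:
                raise ValueError(f"provider {name!r} already registered")
            self._providers[name] = provider
```

The broken variant checks `name in self._providers` outside the lock, then acquires it to insert; two threads can both pass the check and the second silently overwrites the first.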

Removed

  • Six superseded branches that had already been squash-merged into
    master — feat/causal-chain-fix-and-demo-gif,
    feat/entity-vocabulary-expansion,
    feature/RFC-001-conversational-entity-extractor,
    fix/intent-classifier-graph-weight,
    fix/p0-production-blockers, feat/remember-evolve.