
Releases: idep-editor/idep

v0.0.4

18 Mar 02:02


Added

  • LSP document synchronization: didOpen, didChange, didSave, didClose notifications
  • LSP completions: textDocument/completion request builder with full round-trip support
  • CompletionParams construction with URI, position, and context
  • Completion response parsing for CompletionList and CompletionItem[]
  • Buffer::apply_completion() with textEdit range handling (delete range before insert)
  • Buffer::apply_text_edit() for proper LSP range replacement
  • Completion ranking: sort by sort_text (server intent), then label length, deterministic deduplication via BTreeMap
  • completion.rs module for bridging LSP results to buffer
  • rust-analyzer integration test for real completion requests
  • Document sync test with Python mock server (gated by python3 availability)
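The sort_text-then-label-length ranking can be sketched roughly as below. The struct and function here are illustrative, not idep's actual types; the release notes mention BTreeMap-based dedup, while this sketch uses a BTreeSet of seen labels to keep the first (best-ranked) entry per label:

```rust
use std::collections::BTreeSet;

// Pared-down completion item; field names mirror the LSP spec,
// but this is a hypothetical stand-in for idep's real type.
struct CompletionItem {
    label: String,
    sort_text: Option<String>,
}

// Rank by sort_text (falling back to the label, per LSP convention),
// then by label length; dedup keeps the first-ranked item per label.
fn rank_completions(mut items: Vec<CompletionItem>) -> Vec<CompletionItem> {
    items.sort_by(|a, b| {
        let ka = a.sort_text.as_deref().unwrap_or(a.label.as_str());
        let kb = b.sort_text.as_deref().unwrap_or(b.label.as_str());
        ka.cmp(kb).then(a.label.len().cmp(&b.label.len()))
    });
    let mut seen = BTreeSet::new();
    items.retain(|i| seen.insert(i.label.clone()));
    items
}
```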

Fixed

  • textEdit range bug: completions now delete the specified range before inserting, preventing doubled text (e.g., accepting "fn foo" over a buffer containing "fn fo" yields "fn foo", not "fn fofn foo")
  • Dead code shadow in update_cursor (removed variable re-declaration)
  • Cursor positioning: clamp to last character index (line_len - 1), accounting for trailing newlines
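The delete-before-insert fix can be illustrated with a minimal LSP-style range replacement on a plain string. `Range` and `Position` mirror the LSP shape but are hypothetical stand-ins, and offsets are computed in characters for simplicity:

```rust
// Illustrative LSP-like types, not idep's real ones.
#[derive(Clone, Copy)]
struct Position { line: usize, character: usize }
#[derive(Clone, Copy)]
struct Range { start: Position, end: Position }

// Convert a (line, character) position to a character offset,
// clamping the column to the line length.
fn offset_of(text: &str, pos: Position) -> usize {
    let mut off = 0;
    for (i, line) in text.split('\n').enumerate() {
        if i == pos.line {
            return off + pos.character.min(line.chars().count());
        }
        off += line.chars().count() + 1; // +1 for the '\n'
    }
    text.chars().count()
}

// Delete the range first, then insert -- the ordering that prevents
// the doubled-text bug described above.
fn apply_text_edit(text: &str, range: Range, new_text: &str) -> String {
    let start = offset_of(text, range.start);
    let end = offset_of(text, range.end);
    let chars: Vec<char> = text.chars().collect();
    let mut out: String = chars[..start].iter().collect();
    out.push_str(new_text);
    out.extend(chars[end..].iter());
    out
}
```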

Changed

  • Moved rank_completions from LspClient to completion.rs module (better organization)
  • Completion ranking now respects sort_text field per LSP spec
  • Updated Claude workflow to use claude-code-action@v1 with custom prompt and model settings
  • Expanded CLAUDE.md with full SDLC coverage, SemVer guidance, and deployment context

v0.0.3 — LSP Client Lifecycle

13 Mar 15:33


Highlights

  • LSP client lifecycle: spawn, shutdown (graceful + force-kill timeout), restart with backoff, stderr capture.
  • JSON-RPC transport over stdio: Content-Length framing, pending response tracking, notification broadcast; round-trip and malformed-message tests.
  • Initialize handshake helper: client capabilities, stored InitializeResult; rust-analyzer integration test gated by RUN_RA_INT=1.
  • WSL2 path handling: Windows ↔ /mnt/ URI normalization, round-trip tests; rust-analyzer path integration scaffold gated by RUN_WSL_RA_TEST=1.
  • CI: installs rust-analyzer component; initialize → shutdown sequence test.
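The Content-Length framing can be sketched as a pair of stdio helpers. This is a minimal illustration of the LSP base protocol, not idep's actual transport code; note that Content-Length counts bytes, which here matches `body.len()`:

```rust
use std::io::{self, BufRead, Read, Write};

// Frame a JSON-RPC message: Content-Length header, blank line, body.
fn write_message<W: Write>(w: &mut W, body: &str) -> io::Result<()> {
    write!(w, "Content-Length: {}\r\n\r\n{}", body.len(), body)
}

// Read headers until the blank separator line, then read exactly
// Content-Length bytes of body.
fn read_message<R: BufRead>(r: &mut R) -> io::Result<String> {
    let mut len = 0usize;
    loop {
        let mut line = String::new();
        r.read_line(&mut line)?;
        let line = line.trim_end();
        if line.is_empty() { break; } // header/body separator
        if let Some(v) = line.strip_prefix("Content-Length:") {
            len = v.trim().parse().map_err(|_| {
                io::Error::new(io::ErrorKind::InvalidData, "bad Content-Length")
            })?;
        }
    }
    let mut buf = vec![0u8; len];
    r.read_exact(&mut buf)?;
    String::from_utf8(buf)
        .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "not UTF-8"))
}
```

A malformed-message test, as mentioned above, would feed a garbage header and assert the parse error surfaces instead of hanging.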

Technical notes

  • rust-analyzer integration test gate: set RUN_RA_INT=1.
  • WSL integration test gate: set RUN_WSL_RA_TEST=1 (requires /mnt/c and rust-analyzer).
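The Windows ↔ /mnt/ normalization mentioned above can be sketched like this; the function names and exact rules (lowercased drive letter under /mnt/, backslash/slash swap) are an assumption about the common WSL2 convention, not idep's verified implementation:

```rust
// "C:\Users\me" -> "/mnt/c/Users/me"
fn windows_to_wsl(path: &str) -> Option<String> {
    let mut chars = path.chars();
    let drive = chars.next()?;
    if chars.next() != Some(':') || !drive.is_ascii_alphabetic() {
        return None;
    }
    let rest = chars.collect::<String>().replace('\\', "/");
    Some(format!("/mnt/{}{}", drive.to_ascii_lowercase(), rest))
}

// "/mnt/c/Users/me" -> "C:\Users\me"
fn wsl_to_windows(path: &str) -> Option<String> {
    let rest = path.strip_prefix("/mnt/")?;
    let mut chars = rest.chars();
    let drive = chars.next()?;
    if !drive.is_ascii_alphabetic() {
        return None;
    }
    let tail = chars.collect::<String>().replace('/', "\\");
    Some(format!("{}:{}", drive.to_ascii_uppercase(), tail))
}
```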

v0.0.2

11 Mar 18:41


Idep v0.0.2 — AI Works End-to-End

Released: 2026-03-12

The AI layer is now fully operational. Completions flow end-to-end from keypress through
the completion engine to Ollama and back. The core buffer primitives are in place.
This is the foundation everything else builds on.


What's New

AI Completions

  • CompletionEngine wired to llm-ls LSP bridge
  • FIM (fill-in-the-middle) support for three model families:
    • DeepSeek Coder — <|fim▁begin|> / <|fim▁hole|> / <|fim▁end|>
    • StarCoder2 — <fim_prefix> / <fim_suffix> / <fim_middle>
    • CodeLlama — <PRE> / <SUF> / <MID>
  • Model-specific stop sequences — completions halt at function boundaries
  • Post-processing truncation via truncate_on_stop() as fallback
  • Debounce logic — configurable, default 300ms
  • Ollama backend: raw: true + temperature: 0.0 for all completion requests
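FIM prompt assembly for these families can be sketched as follows. The sentinel tokens are the published ones for each model family listed above, but the enum, function names, and the `truncate_on_stop` fallback shown here are illustrative, not idep's actual code:

```rust
enum ModelFamily { DeepSeekCoder, StarCoder2, CodeLlama }

// Wrap the text before and after the cursor in the family's FIM tokens.
fn fim_prompt(family: &ModelFamily, prefix: &str, suffix: &str) -> String {
    match family {
        ModelFamily::DeepSeekCoder => format!(
            "<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"
        ),
        ModelFamily::StarCoder2 => format!(
            "<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
        ),
        ModelFamily::CodeLlama => format!("<PRE> {prefix} <SUF>{suffix} <MID>"),
    }
}

// Post-processing fallback: cut the generation at the earliest
// occurrence of any stop sequence.
fn truncate_on_stop(output: &str, stops: &[&str]) -> String {
    let cut = stops
        .iter()
        .filter_map(|s| output.find(s))
        .min()
        .unwrap_or(output.len());
    output[..cut].to_string()
}
```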

AI Chat

  • Streaming token callback restored to ChatSession::send()
  • Debounce wired through to chat context

Core Buffer

  • Buffer::insert(pos, text)
  • Buffer::delete(range)
  • Buffer::lines() -> impl Iterator
  • Buffer::to_string()
  • Cursor position tracking
  • Workspace::open_file(path) -> Buffer
  • Workspace::save_file(path, buffer)
  • File watcher (notify crate) → triggers Indexer::reindex_file on save
  • Full unit test coverage for all buffer operations
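The listed API shape suggests roughly the following. This is a toy String-backed sketch for illustration only; idep's real Buffer is presumably a more capable structure:

```rust
use std::ops::Range;

// Hypothetical minimal buffer mirroring the API names listed above.
struct Buffer {
    text: String,
}

impl Buffer {
    fn new(text: &str) -> Self {
        Buffer { text: text.to_string() }
    }
    // Insert text at a byte offset.
    fn insert(&mut self, pos: usize, text: &str) {
        self.text.insert_str(pos, text);
    }
    // Delete a byte range.
    fn delete(&mut self, range: Range<usize>) {
        self.text.replace_range(range, "");
    }
    // Iterate over lines without trailing newlines.
    fn lines(&self) -> impl Iterator<Item = &str> {
        self.text.lines()
    }
    fn to_string(&self) -> String {
        self.text.clone()
    }
}
```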

Validated

Tested against deepseek-coder:1.3b running locally via Ollama on WSL2:

```rust
fn add(a: i32, b: i32) -> i32 {
    // ← model fills this
}
```

Model output: a + b — correct, idiomatic, stopped at }\n. 7 tokens, ~550ms.

See [ollama-smoke-test.md](docs/testing/ollama-smoke-test.md) for the full validation procedure.


What's Next — v0.0.3

LSP client goes in next. Target: Week 5.

  • idep-lsp client lifecycle: initialize → initialized → shutdown
  • Spawn rust-analyzer and typescript-language-server
  • textDocument/completion bridged to CompletionEngine
  • llm-ls wired as virtual LSP for AI completions
  • tree-sitter AST chunking for the indexer (replaces naive line chunking)
  • fastembed-rs local embeddings + usearch in-process vector search

Notes

  • No binary release — editor UI does not exist yet. Binaries ship at v0.1.0-alpha.
  • GitHub Sponsors is live (quiet launch, no announcement).
  • The ollama_backend integration test currently uses a mock server for CI speed.
    Run cargo test -- --ignored locally to hit a real Ollama instance.

Built in Bali 🌴 — [idep.dev](https://idep.dev) · [GitHub](https://github.com/idep-editor/idep)

v0.0.1

11 Mar 12:49
bb32fce


v0.0.1 — Repo Foundation

Think in code. Own your tools.

This is the foundation release. No binary, no editor UI — but a clean, tested base to build on in public.


What's in this release

Cargo workspace — five crates scaffolded: idep-core, idep-ai, idep-lsp, idep-plugin, idep-index.

AI backend layer — all four backends implemented, tested, and hardened:

  • OllamaBackend — local, no API key
  • AnthropicBackend
  • HuggingFaceBackend
  • OpenAiCompatBackend — Groq, Together, LM Studio, anything OpenAI-compatible

All backends include retry with exponential backoff and rate-limit handling.
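The backoff schedule can be sketched as a capped exponential delay. The base, factor, and cap values here are illustrative, not idep's actual configuration:

```rust
use std::time::Duration;

// Delay before retry `attempt` (0-based): base * 2^attempt, capped.
// Saturating arithmetic avoids overflow at high attempt counts.
fn backoff_delay(attempt: u32, base_ms: u64, cap_ms: u64) -> Duration {
    let exp = base_ms.saturating_mul(2u64.saturating_pow(attempt));
    Duration::from_millis(exp.min(cap_ms))
}
```

A retry loop would sleep for `backoff_delay(n, ...)` after the n-th failure, and a rate-limit handler would additionally honor a server-sent Retry-After where present.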

BackendInfo / info() trait method — runtime diagnostics: which backend is active, whether it's cloud-dependent, whether auth is required.

Config schema — ~/.config/idep/config.toml with XDG resolution. Switch backends by changing one line. Full reference in config.example.toml.
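A config along these lines — every key name below is hypothetical; config.example.toml is the authoritative reference:

```toml
# Illustrative sketch only; see config.example.toml for the real schema.
[ai]
backend = "ollama"   # the one line to change: "anthropic", "huggingface", ...

[ai.ollama]
model = "deepseek-coder:1.3b"
url = "http://localhost:11434"
```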

Project docs — SECURITY.md, SUSTAINABILITY.md, CONTRIBUTING.md, CHANGELOG.md, CI, pre-commit hooks.


What's not here yet

The editor UI does not exist. Completions are not wired end-to-end. This is a source-only release.

See TODO.md for what's next.


Next: v0.0.2

Ollama completions working end-to-end — the first thing you'll actually be able to run.


Built in Bali 🌴 — idep.dev