Releases: idep-editor/idep
v0.0.4
Added
- LSP document synchronization: `didOpen`, `didChange`, `didSave`, `didClose` notifications
- LSP completions:
  - `textDocument/completion` request builder with full round-trip support
  - `CompletionParams` construction with URI, position, and context
  - Completion response parsing for `CompletionList` and `CompletionItem[]`
  - `Buffer::apply_completion()` with `textEdit` range handling (delete range before insert)
  - `Buffer::apply_text_edit()` for proper LSP range replacement
- Completion ranking: sort by `sort_text` (server intent), then label length; deterministic deduplication via `BTreeMap`
- `completion.rs` module for bridging LSP results to buffer
- rust-analyzer integration test for real completion requests
- Document sync test with Python mock server (gated by python3 availability)
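The "delete range before insert" behavior can be sketched as follows. This is a minimal illustration under assumed simplifications, not the actual `Buffer` internals: positions are zero-based `(line, character)` pairs, and characters are treated as Rust `char`s rather than the UTF-16 code units the LSP spec prescribes.

```rust
// Toy LSP-style text-edit application: compute byte offsets for the
// (start..end) range, delete that range, then insert the new text.

fn offset_of(text: &str, line: usize, character: usize) -> usize {
    let mut off = 0;
    for (i, l) in text.split_inclusive('\n').enumerate() {
        if i == line {
            // Byte offset of the `character`-th char within this line.
            return off + l.chars().take(character).map(|c| c.len_utf8()).sum::<usize>();
        }
        off += l.len();
    }
    off
}

fn apply_text_edit(
    text: &str,
    start: (usize, usize),
    end: (usize, usize),
    new_text: &str,
) -> String {
    let s = offset_of(text, start.0, start.1);
    let e = offset_of(text, end.0, end.1);
    let mut out = String::with_capacity(text.len() + new_text.len());
    out.push_str(&text[..s]);
    out.push_str(new_text); // the insert replaces the deleted range
    out.push_str(&text[e..]);
    out
}

fn main() {
    // Replacing the already-typed "fn fo" with the completion "fn foo"
    // avoids producing "fn fofn foo".
    let fixed = apply_text_edit("fn fo", (0, 0), (0, 5), "fn foo");
    println!("{fixed}"); // fn foo
}
```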
Fixed
- `textEdit` range bug: completions now properly delete the specified range before inserting, preventing doubled text (e.g., "fn fo" + "fn foo" → "fn foo")
- Dead code shadow in `update_cursor` (removed variable re-declaration)
- Cursor positioning: clamp to last character index (`line_len - 1`), accounting for trailing newlines
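The cursor clamp can be illustrated with a small sketch (the function name and exact rules here are illustrative, not the editor's actual `update_cursor` logic): the cursor may sit at most on the last character of the line, and a trailing `'\n'` is not a valid target.

```rust
// Illustrative clamp for cursor column placement. A trailing newline
// is stripped before computing line length, so the cursor never lands
// on it; on an empty line only column 0 is valid.
fn clamp_column(line: &str, wanted: usize) -> usize {
    let visible = line.strip_suffix('\n').unwrap_or(line);
    let line_len = visible.chars().count();
    wanted.min(line_len.saturating_sub(1)) // clamp to line_len - 1
}

fn main() {
    println!("{}", clamp_column("hello\n", 99)); // 4 — index of 'o'
    println!("{}", clamp_column("", 7));         // 0 — empty line
}
```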
Changed
- Moved `rank_completions` from `LspClient` to the `completion.rs` module (better organization)
- Completion ranking now respects the `sort_text` field per the LSP spec
- Updated Claude workflow to use `claude-code-action@v1` with custom prompt and model settings
- Expanded `CLAUDE.md` with full SDLC coverage, SemVer guidance, and deployment context
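The ranking scheme described above can be sketched like this. The item type is a hypothetical simplification (real LSP `CompletionItem`s carry many more fields), but the ordering follows the spec's intent: `sortText` when present, falling back to the label, with label length as a tiebreaker and `BTreeMap`-based deduplication for determinism.

```rust
use std::collections::BTreeMap;

// Simplified stand-in for an LSP completion item.
#[derive(Clone, Debug, PartialEq)]
struct Item {
    label: String,
    sort_text: Option<String>,
}

// Rank by sort_text (server intent, falling back to the label), then
// by label length; deduplicate by label via BTreeMap, whose ordered
// iteration keeps the result deterministic.
fn rank_completions(items: Vec<Item>) -> Vec<Item> {
    let mut by_label: BTreeMap<String, Item> = BTreeMap::new();
    for it in items {
        by_label.entry(it.label.clone()).or_insert(it); // keep first
    }
    let mut ranked: Vec<Item> = by_label.into_values().collect();
    ranked.sort_by(|a, b| {
        let ka = a.sort_text.as_deref().unwrap_or(&a.label);
        let kb = b.sort_text.as_deref().unwrap_or(&b.label);
        ka.cmp(kb).then(a.label.len().cmp(&b.label.len()))
    });
    ranked
}

fn main() {
    let ranked = rank_completions(vec![
        Item { label: "foo_long".into(), sort_text: Some("0001".into()) },
        Item { label: "foo".into(), sort_text: Some("0000".into()) },
        Item { label: "foo".into(), sort_text: Some("0000".into()) }, // duplicate
    ]);
    let labels: Vec<_> = ranked.iter().map(|i| i.label.as_str()).collect();
    println!("{labels:?}"); // ["foo", "foo_long"]
}
```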
v0.0.3 — LSP Client Lifecycle
Highlights
- LSP client lifecycle: spawn, shutdown (graceful + force-kill timeout), restart with backoff, stderr capture.
- JSON-RPC transport over stdio: Content-Length framing, pending response tracking, notification broadcast; round-trip and malformed-message tests.
- Initialize handshake helper: client capabilities, stored InitializeResult; rust-analyzer integration test gated by RUN_RA_INT=1.
- WSL2 path handling: Windows ↔ /mnt/ URI normalization, round-trip tests; rust-analyzer path integration scaffold gated by RUN_WSL_RA_TEST=1.
- CI: installs rust-analyzer component; initialize→shutdown sequence test.
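The Content-Length framing mentioned above follows the LSP base protocol: each JSON-RPC message is preceded by a `Content-Length` header (byte count of the payload) and a blank line. A dependency-free sketch, with the payload hand-written rather than serialized:

```rust
use std::io::Write;

// Frame a JSON-RPC payload for stdio transport per the LSP base
// protocol: header, CRLF CRLF, then the body. `payload.len()` is the
// byte length, which is what the header must carry.
fn write_frame(out: &mut impl Write, payload: &str) -> std::io::Result<()> {
    write!(out, "Content-Length: {}\r\n\r\n{}", payload.len(), payload)
}

fn main() {
    let mut buf = Vec::new();
    let msg = r#"{"jsonrpc":"2.0","id":1,"method":"shutdown"}"#;
    write_frame(&mut buf, msg).unwrap();
    println!("{}", String::from_utf8(buf).unwrap());
}
```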
Technical notes
- rust-analyzer integration test gate: set RUN_RA_INT=1.
- WSL integration test gate: set RUN_WSL_RA_TEST=1 (requires /mnt/c and rust-analyzer).
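The core of the Windows → `/mnt/` rewrite can be sketched as follows. This shows only the drive-letter mapping under assumed input shapes; the real normalization also handles `file://` URIs and round-trips back to Windows form.

```rust
// Illustrative Windows-to-WSL path mapping, e.g.
// C:\Users\me\proj → /mnt/c/Users/me/proj.
fn windows_to_wsl(path: &str) -> Option<String> {
    let mut chars = path.chars();
    let drive = chars.next()?.to_ascii_lowercase();
    if !drive.is_ascii_alphabetic() || chars.next()? != ':' {
        return None; // not a drive-letter path
    }
    // Lowercase the drive, flip backslashes to forward slashes.
    let rest: String = chars.as_str().replace('\\', "/");
    Some(format!("/mnt/{drive}{rest}"))
}

fn main() {
    let p = windows_to_wsl(r"C:\Users\me\proj").unwrap();
    println!("{p}"); // /mnt/c/Users/me/proj
}
```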
v0.0.2
Idep v0.0.2 — AI Works End-to-End
Released: 2026-03-12
The AI layer is now fully operational. Completions flow end-to-end from keypress through
the completion engine to Ollama and back. The core buffer primitives are in place.
This is the foundation everything else builds on.
What's New
AI Completions
- `CompletionEngine` wired to the `llm-ls` LSP bridge
- FIM (fill-in-the-middle) support for three model families:
  - DeepSeek Coder — `<|fim▁begin|>` / `<|fim▁hole|>` / `<|fim▁end|>`
  - StarCoder2 — `<fim_prefix>` / `<fim_suffix>` / `<fim_middle>`
  - CodeLlama — `<PRE>` / `<SUF>` / `<MID>`
- Model-specific stop sequences — completions halt at function boundaries
- Post-processing truncation via `truncate_on_stop()` as fallback
- Debounce logic — configurable, default 300ms
- Ollama backend: `raw: true` + `temperature: 0.0` for all completion requests
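Prompt assembly for the three template families, plus the truncation fallback, can be sketched like this. The marker strings are the families' published FIM tokens as listed above; the function and enum names are illustrative rather than the actual `idep-ai` API.

```rust
// Sketch of FIM prompt assembly and stop-sequence truncation.
enum Family {
    DeepSeek,
    StarCoder2,
    CodeLlama,
}

fn fim_prompt(family: &Family, prefix: &str, suffix: &str) -> String {
    match family {
        Family::DeepSeek => {
            format!("<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>")
        }
        Family::StarCoder2 => {
            format!("<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>")
        }
        Family::CodeLlama => format!("<PRE>{prefix}<SUF>{suffix}<MID>"),
    }
}

// Fallback truncation: cut the completion at the earliest stop
// sequence, if any appears in the raw model output.
fn truncate_on_stop(completion: &str, stops: &[&str]) -> String {
    let cut = stops
        .iter()
        .filter_map(|s| completion.find(s))
        .min()
        .unwrap_or(completion.len());
    completion[..cut].to_string()
}

fn main() {
    let p = fim_prompt(&Family::CodeLlama, "fn add(", ") { }");
    println!("{p}"); // <PRE>fn add(<SUF>) { }<MID>
    let out = truncate_on_stop("a + b\n}\nfn next()", &["\n}"]);
    println!("{out:?}"); // "a + b"
}
```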
AI Chat
- Streaming token callback restored to `ChatSession::send()`
- Debounce wired through to chat context
Core Buffer
- `Buffer::insert(pos, text)`
- `Buffer::delete(range)`
- `Buffer::lines() -> impl Iterator`
- `Buffer::to_string()`
- Cursor position tracking
- `Workspace::open_file(path) -> Buffer`
- `Workspace::save_file(path, buffer)`
- File watcher (`notify` crate) → triggers `Indexer::reindex_file` on save
- Full unit test coverage for all buffer operations
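The buffer call shapes above can be exercised with a minimal stand-in. This toy version backs the buffer with a plain `String` purely for illustration; the signatures mirror the list above, not the real `idep-core` internals.

```rust
use std::ops::Range;

// Minimal String-backed stand-in for the buffer API listed above.
struct Buffer {
    text: String,
}

impl Buffer {
    fn new(text: &str) -> Self {
        Self { text: text.to_string() }
    }
    fn insert(&mut self, pos: usize, text: &str) {
        self.text.insert_str(pos, text);
    }
    fn delete(&mut self, range: Range<usize>) {
        self.text.replace_range(range, "");
    }
    fn lines(&self) -> impl Iterator<Item = &str> {
        self.text.lines()
    }
    fn to_string(&self) -> String {
        self.text.clone()
    }
}

fn main() {
    let mut buf = Buffer::new("hello world\n");
    buf.insert(5, ","); // "hello, world\n"
    buf.delete(7..12);  // removes "world" → "hello, \n"
    println!("{} line(s): {}", buf.lines().count(), buf.to_string());
}
```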
Validated
Tested against deepseek-coder:1.3b running locally via Ollama on WSL2:
```rust
fn add(a: i32, b: i32) -> i32 {
    // ← model fills this
}
```
Model output: `a + b` — correct, idiomatic, stopped at `}\n`. 7 tokens, ~550ms.
See [ollama-smoke-test.md](docs/testing/ollama-smoke-test.md) for the full validation procedure.
What's Next — v0.0.3
LSP client goes in next. Target: Week 5.
- `idep-lsp` client lifecycle: `initialize` → `initialized` → `shutdown`
- Spawn `rust-analyzer` and `typescript-language-server`
- `textDocument/completion` bridged to `CompletionEngine`
- `llm-ls` wired as a virtual LSP for AI completions
- tree-sitter AST chunking for the indexer (replaces naive line chunking)
- `fastembed-rs` local embeddings + `usearch` in-process vector search
Notes
- No binary release — editor UI does not exist yet. Binaries ship at v0.1.0-alpha.
- GitHub Sponsors is live (quiet launch, no announcement).
- The `ollama_backend` integration test currently uses a mock server for CI speed.
  Run `cargo test -- --ignored` locally to hit a real Ollama instance.
Built in Bali 🌴 — [idep.dev](https://idep.dev) · [GitHub](https://github.com/idep-editor/idep)
v0.0.1
v0.0.1 — Repo Foundation
Think in code. Own your tools.
This is the foundation release. No binary, no editor UI — but a clean, tested base to build on in public.
What's in this release
Cargo workspace — five crates scaffolded: `idep-core`, `idep-ai`, `idep-lsp`, `idep-plugin`, `idep-index`.
AI backend layer — all four backends implemented, tested, and hardened:
- `OllamaBackend` — local, no API key
- `AnthropicBackend`
- `HuggingFaceBackend`
- `OpenAiCompatBackend` — Groq, Together, LM Studio, anything OpenAI-compatible
All backends include retry with exponential backoff and rate-limit handling.
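The retry behavior can be sketched generically. The shipped backends implement this per-backend with rate-limit awareness; the function name, attempt count, and delays below are illustrative.

```rust
use std::time::Duration;

// Generic retry with exponential backoff: double the delay after each
// failed attempt, give up after `attempts` tries.
fn retry_with_backoff<T, E>(
    mut attempts: u32,
    base_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = base_delay;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempts <= 1 => return Err(e), // out of attempts
            Err(_) => {
                std::thread::sleep(delay);
                delay *= 2; // exponential backoff
                attempts -= 1;
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    let result: Result<&str, &str> =
        retry_with_backoff(3, Duration::from_millis(1), || {
            calls += 1;
            if calls < 3 { Err("rate limited") } else { Ok("ok") }
        });
    println!("{result:?} after {calls} calls"); // Ok("ok") after 3 calls
}
```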
`BackendInfo` / `info()` trait method — runtime diagnostics: which backend is active, whether it's cloud-dependent, whether auth is required.
Config schema — `~/.config/idep/config.toml` with XDG resolution. Switch backends by changing one line. Full reference in `config.example.toml`.
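A hypothetical fragment showing the one-line switch — the table and field names here are illustrative, not the authoritative schema; see `config.example.toml` in the repo for the real reference:

```toml
# Illustrative shape only — field names may differ from the shipped schema.
[ai]
backend = "ollama"   # change to "anthropic", "huggingface", or "openai_compat"

[ai.ollama]
model = "deepseek-coder:1.3b"
```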
Project docs — SECURITY.md, SUSTAINABILITY.md, CONTRIBUTING.md, CHANGELOG.md, CI, pre-commit hooks.
What's not here yet
The editor UI does not exist. Completions are not wired end-to-end. This is a source-only release.
See TODO.md for what's next.
Next: v0.0.2
Ollama completions working end-to-end — the first thing you'll actually be able to run.
Built in Bali 🌴 — idep.dev