Releases: dotcommander/piglet
v0.23.1
Full Changelog: v0.23.0...v0.23.1
v0.23.0
Full Changelog: v0.22.8...v0.23.0
v0.22.8
Full Changelog: v0.22.7...v0.22.8
v0.22.7
Full Changelog: v0.22.6...v0.22.7
v0.22.6
Refactor: extract large files into focused modules across all packages
- ext: split app.go into bus.go, idle.go, interceptor.go, notify.go, toolset.go, unregister.go
- sdk: split sdk.go into dispatch.go, handlers.go, notify.go, register.go, registrations.go, run.go
- session: extract append.go, list.go, replay.go, tree.go, types.go from session.go
- provider: extract openai_types.go, openai_parse.go from openai.go
- extensions: split register.go files in cron, inbox, memory, plan, route, repomap, webfetch, changelog, recall, session-tools
- shell/tui/tool/prompt: extract actions.go, handlers.go, helpers.go, undo.go
v0.22.4
What's Changed
feat(probe): server-aware loaded model detection for local inference servers
- llama.cpp router: parse nested `status.value` field (loaded/loading/unloaded) from `/v1/models` response
- Ollama: probe `/api/ps` for warm (in-memory) models before falling back to the full model list
- LM Studio: `state` field parsing already existed; now shares the same `bestResult` selection path
- vLLM and llamacpp: added to `detectServerType` and `IsLocalProvider`
- Consolidated selection logic: `bestResult` helper replaces three copies of the same preference loop; prefers loaded chat models, falls back to unloaded, then embedding fallback
- Removed: non-standard `mlx ps` CLI dependency from local provider detection
- Tests: new cases for llama.cpp router status, Ollama `/api/ps` warm-model preference, loaded-model priority, and no-state fallback
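The consolidated preference loop described above can be sketched as follows. This is a minimal illustration, not piglet's actual code: the `Model` type, its fields, and the state constants are assumptions; only the selection order (loaded chat model first, then unloaded chat model, then embedding fallback) comes from the release notes.

```go
package main

import "fmt"

// ModelState mirrors the loaded/loading/unloaded status a local
// inference server reports. Names here are illustrative.
type ModelState int

const (
	StateUnknown ModelState = iota
	StateUnloaded
	StateLoaded
)

type Model struct {
	ID        string
	State     ModelState
	Embedding bool // true for embedding-only models
}

// bestResult sketches the preference loop from the notes: prefer a
// loaded chat model, fall back to an unloaded chat model, and only
// then accept an embedding model.
func bestResult(models []Model) (Model, bool) {
	var unloadedChat, embedding *Model
	for i := range models {
		m := &models[i]
		if m.Embedding {
			if embedding == nil {
				embedding = m
			}
			continue
		}
		if m.State == StateLoaded {
			return *m, true // best case: a chat model already in memory
		}
		if unloadedChat == nil {
			unloadedChat = m
		}
	}
	if unloadedChat != nil {
		return *unloadedChat, true
	}
	if embedding != nil {
		return *embedding, true
	}
	return Model{}, false
}

func main() {
	models := []Model{
		{ID: "nomic-embed", Embedding: true},
		{ID: "qwen2.5", State: StateUnloaded},
		{ID: "llama3.1", State: StateLoaded},
	}
	if m, ok := bestResult(models); ok {
		fmt.Println(m.ID) // the loaded chat model wins
	}
}
```

Centralizing this in one helper (rather than three copies of the loop) means all three server probes agree on what "best" means.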
chore(deps): update mcp-go to v0.47.1
v0.22.3
Full Changelog: v0.22.2...v0.22.3
v0.22.2
Full Changelog: v0.22.1...v0.22.2
v0.22.1
What's Changed
Documentation improvements for local model users.
Local Models
- Added Local Models section to README with quick-start port syntax (`piglet --model :1234`)
- Added Ollama to the providers table
- Split API Keys section in getting-started.md into Cloud Provider and Local Model paths
- Updated prerequisites to note that a local model server is an alternative to an API key
Progressive Tool Disclosure
- Added Progressive Tool Disclosure section to docs/providers.md with full explanation of compact mode, the activation flow, and configuration
- Clarified `deferredToolsNote` config option (local models only)
- Removed duplicate `compactKeepRecent` row from configuration.md agent table
- Added `deferredToolsNote` to the full config example
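To make the two options above concrete, here is a hypothetical config fragment. Only the key names `deferredToolsNote` and `compactKeepRecent` come from these notes; the `agent` section placement and the values shown are invented for illustration (see configuration.md for the real schema).

```yaml
# Hypothetical layout -- key names from the release notes,
# structure and values are illustrative guesses only.
agent:
  compactKeepRecent: 4  # illustrative value
  deferredToolsNote: "More tools can be loaded on request."  # local models only
```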
Provider Table
- Expanded server table with detection notes (LM Studio via `owned_by`, Ollama via headers)
- Added embedding model filtering note
v0.22.0
Full Changelog: v0.21.0...v0.22.0