Releases: dotcommander/piglet

v0.23.1

24 Apr 17:11

v0.23.0

22 Apr 04:11

Full Changelog: v0.22.8...v0.23.0

v0.22.8

18 Apr 16:48

Full Changelog: v0.22.7...v0.22.8

v0.22.7

10 Apr 11:12

Full Changelog: v0.22.6...v0.22.7

v0.22.6

09 Apr 23:45

Refactor: extract large files into focused modules across all packages

  • ext: split app.go into bus.go, idle.go, interceptor.go, notify.go, toolset.go, unregister.go
  • sdk: split sdk.go into dispatch.go, handlers.go, notify.go, register.go, registrations.go, run.go
  • session: extract append.go, list.go, replay.go, tree.go, types.go from session.go
  • provider: extract openai_types.go, openai_parse.go from openai.go
  • extensions: split register.go files in cron, inbox, memory, plan, route, repomap, webfetch, changelog, recall, session-tools
  • shell/tui/tool/prompt: extract actions.go, handlers.go, helpers.go, undo.go

v0.22.4

09 Apr 19:30

What's Changed

feat(probe): server-aware loaded model detection for local inference servers

  • llama.cpp router: parse the nested status.value field (loaded / loading / unloaded) from the /v1/models response
  • Ollama: probe /api/ps for warm (in-memory) models before falling back to the full model list
  • LM Studio: state field parsing already existed; now shares the same bestResult selection path
  • vLLM and llamacpp: added to detectServerType and IsLocalProvider
  • Consolidated selection logic: the bestResult helper replaces three copies of the same preference loop: prefer loaded chat models, fall back to unloaded chat models, then to embedding models as a last resort
  • Removed: non-standard mlx ps CLI dependency from local provider detection
  • Tests: new cases for llama.cpp router status, Ollama /api/ps warm-model preference, loaded-model priority, and no-state fallback
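
The consolidated preference loop can be illustrated with a minimal Go sketch. The type and field names below (probedModel, ModelState) are hypothetical stand-ins, not piglet's actual API; only the selection order (loaded chat, then unloaded chat, then embedding fallback) comes from the notes above.

```go
package main

import "fmt"

// ModelState mirrors the loaded / loading / unloaded states reported
// by local inference servers during a probe.
type ModelState int

const (
	StateUnloaded ModelState = iota
	StateLoading
	StateLoaded
)

// probedModel is a hypothetical record of one model seen during a probe.
type probedModel struct {
	ID        string
	State     ModelState
	Embedding bool // true for embedding-only models
}

// bestResult sketches the shared preference loop: return the first loaded
// chat model; otherwise fall back to any chat model, then to an embedding
// model; report false if the probe saw nothing usable.
func bestResult(models []probedModel) (probedModel, bool) {
	var unloadedChat, embedding *probedModel
	for i := range models {
		m := &models[i]
		switch {
		case !m.Embedding && m.State == StateLoaded:
			return *m, true // best case: a chat model already in memory
		case !m.Embedding && unloadedChat == nil:
			unloadedChat = m
		case m.Embedding && embedding == nil:
			embedding = m
		}
	}
	if unloadedChat != nil {
		return *unloadedChat, true
	}
	if embedding != nil {
		return *embedding, true
	}
	return probedModel{}, false
}

func main() {
	models := []probedModel{
		{ID: "nomic-embed", Embedding: true},
		{ID: "qwen2.5", State: StateUnloaded},
		{ID: "llama3.1", State: StateLoaded},
	}
	best, _ := bestResult(models)
	fmt.Println(best.ID) // llama3.1
}
```

Centralizing this in one helper is what lets the llama.cpp router, Ollama, and LM Studio probes feed their differently sourced state fields into the same selection path.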

chore(deps): update mcp-go to v0.47.1

v0.22.3

09 Apr 19:11

Full Changelog: v0.22.2...v0.22.3

v0.22.2

09 Apr 18:44

Full Changelog: v0.22.1...v0.22.2

v0.22.1

09 Apr 18:21

What's Changed

Documentation improvements for local model users.

Local Models

  • Added Local Models section to README with quick-start port syntax (piglet --model :1234)
  • Added Ollama to the providers table
  • Split API Keys section in getting-started.md into Cloud Provider and Local Model paths
  • Updated prerequisites to note that a local model server is an alternative to an API key

Progressive Tool Disclosure

  • Added Progressive Tool Disclosure section to docs/providers.md with a full explanation of compact mode, the activation flow, and configuration
  • Clarified deferredToolsNote config option (local models only)
  • Removed duplicate compactKeepRecent row from configuration.md agent table
  • Added deferredToolsNote to the full config example

Provider Table

  • Expanded server table with detection notes (LM Studio via owned_by, Ollama via headers)
  • Added embedding model filtering note

v0.22.0

09 Apr 17:08

Full Changelog: v0.21.0...v0.22.0