super-ollama is a terminal-native local LLM tool built on the same inference stack as Ollama. It runs models in-process, with no HTTP server needed for day-to-day use: you talk to the model through the `super-ollama` CLI.
- `super-ollama ask` — one-shot prompt (args or stdin)
- `super-ollama chat` — interactive chat (streaming)
- `super-ollama config` — show the config path and default model
- `ollama` — unchanged upstream-style binary for `ollama serve`, `ollama pull`, `ollama run`, etc.
The default model is `gemma3:1b` unless you set `default_model` in the config or pass `--model`.
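For instance, `ask` reads its prompt from stdin when no argument is given, and `config` reports where the config file lives and which model is the default. A quick sketch, assuming the binary has been built as shown in the next section (the prompt text is illustrative):

```
# prompt via stdin instead of an argument
echo "Summarize what a goroutine is." | ./bin/super-ollama ask

# show the config path and default model
./bin/super-ollama config
```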
Build:

```
make build-super-ollama
# or: go build -o bin/super-ollama ./cmd/super-ollama
```
Then run:

```
./bin/super-ollama ask "Explain what a goroutine is in one paragraph." --model gemma3:1b
./bin/super-ollama chat --model gemma3:1b
```

Pull models with the `ollama` CLI (built from this repo with `go build -o ollama .`); `ollama serve` is not required for `super-ollama ask` / `chat`.
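If a model has not been pulled yet, here is a minimal sketch of fetching it first, assuming the upstream behavior where `ollama pull` goes through a running server:

```
go build -o ollama .
./ollama serve &        # pull talks to the server, as in upstream Ollama
./ollama pull gemma3:1b
```

Once the model is local, `super-ollama ask` / `chat` work without the server running, per the note above.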
Config file locations:

| Location | Used when |
|---|---|
| `$XDG_CONFIG_HOME/super-ollama/config.toml` | `XDG_CONFIG_HOME` is set |
| `~/.super-ollama/config.toml` | otherwise |
Example:
```
default_model = "gemma3:1b"
```

By default, super-ollama logs at WARN on stderr so scheduler noise stays quiet. For full detail:
```
OLLAMA_DEBUG=1 ./bin/super-ollama ask "hello"
```
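Because logs go to stderr, they can be captured separately from the model's output; a small sketch (the log filename is arbitrary):

```
OLLAMA_DEBUG=1 ./bin/super-ollama ask "hello" 2> super-ollama.log
```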
Development commands:

```
go build -o ollama .
make build-super-ollama
go test ./...
```
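To iterate on the CLI alone, standard Go tooling can scope the test run; a sketch assuming the `./cmd/super-ollama` package (used in the build command above) carries its own tests:

```
go test ./cmd/super-ollama/...
```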
Documentation:

- GitHub Wiki — roadmap and setup (enable Wiki under repo Settings if the link is empty)
- `docs/wiki/` — Markdown source; publish to the GitHub Wiki with `./scripts/publish-wiki.sh` after you create the first wiki page in the browser (GitHub only provisions the wiki git remote after that)
- `super-ollama-agent-plan.md` — full phased product plan
To publish once that first page exists:

```
# one-time: open wiki in browser, add a Home page, save
gh browse -w -R Kritarth-Dandapat/ollama-custom
./scripts/publish-wiki.sh
```

License: MIT (inherits upstream licensing for vendored/submodule portions; see the repository files for detail).