Add Ollama support for local LLM inference#44
Open
schereroz wants to merge 1 commit into viperrcrypto:main
Conversation
Adds full Ollama integration so users can run Siftly with local models
(llama3.1, mistral, etc.) — completely free, no API keys needed.
Changes:
- lib/settings.ts: New AIProvider type ('anthropic'|'openai'|'ollama'),
getOllamaModel(), getOllamaBaseUrl() with caching
- lib/ai-client.ts: resolveOllamaClient() using OpenAI-compatible API,
OpenAIAIClient now accepts provider parameter
- app/api/settings/route.ts: GET/POST handlers for ollamaModel & ollamaBaseUrl
- app/api/settings/test/route.ts: Ollama connection test with friendly errors
- app/api/settings/cli-status/route.ts: Ollama availability check via /api/tags
- app/settings/page.tsx: Three-tab provider toggle (Anthropic/OpenAI/Ollama),
OllamaSettingsPanel with auto-detected models dropdown, base URL config,
and connection test button
- app/api/categorize/route.ts, search/ai/route.ts, analyze/images/route.ts,
lib/categorizer.ts: Handle 'ollama' provider in API key resolution
- .env.example: Document OLLAMA_BASE_URL
- CLAUDE.md: Ollama setup instructions
https://claude.ai/code/session_01HFSAKuoayERDciumSrRUyt
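As a sketch of the settings-caching approach described above (the function and helper names here are illustrative, not the PR's actual code, and the sketch assumes a 5-minute TTL with an injectable loader so it stays self-contained):

```typescript
// Sketch of a 5-minute settings cache in the style of getOllamaBaseUrl().
// Names (loadBaseUrlFromSettings, CacheEntry) are hypothetical.

type CacheEntry = { value: string; fetchedAt: number };

const TTL_MS = 5 * 60 * 1000; // 5-minute cache window
let cachedBaseUrl: CacheEntry | null = null;

function loadBaseUrlFromSettings(): string {
  // The real code would read the persisted setting / OLLAMA_BASE_URL env var;
  // a constant default keeps this sketch self-contained.
  return "http://localhost:11434";
}

export function getOllamaBaseUrl(
  now: number = Date.now(),
  load: () => string = loadBaseUrlFromSettings
): string {
  // Serve the cached value while it is younger than the TTL.
  if (cachedBaseUrl && now - cachedBaseUrl.fetchedAt < TTL_MS) {
    return cachedBaseUrl.value;
  }
  cachedBaseUrl = { value: load(), fetchedAt: now };
  return cachedBaseUrl.value;
}
```

The injectable `now`/`load` parameters are only there to make the cache behavior observable in isolation; production callers would use the defaults.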
Summary
Adds support for Ollama as a third AI provider option, enabling users to run local LLMs completely offline without API keys. Users can now switch between Anthropic, OpenAI, and Ollama in Settings.
Changes
- AIProvider type supporting 'anthropic' | 'openai' | 'ollama'
- getOllamaModel() and getOllamaBaseUrl() functions with 5-minute caching
- OllamaSettingsPanel component with model selection, base URL configuration, and connection testing
- ProviderToggle extended to include an Ollama button alongside Anthropic and OpenAI
- resolveOllamaClient() function that creates an OpenAI SDK instance pointing to Ollama's /v1 endpoint
- OpenAIAIClient extended to support Ollama as a provider variant
- resolveAIClient() updated to instantiate the Ollama client when the provider is 'ollama'
- /api/settings: Added Ollama model and base URL persistence
- /api/settings/cli-status: Added Ollama availability detection via the /api/tags endpoint; returns available models
- /api/settings/test: Added Ollama connection test with friendly error messages
Related Issues
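The /api/tags availability probe above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: `checkOllama`, `FetchLike`, and `OllamaStatus` are hypothetical names, and the sketch assumes Ollama's standard tags response shape of `{ models: [{ name: "..." }] }`. The fetch function is injectable so the probe can be exercised without a live server.

```typescript
// Sketch of an Ollama availability probe: GET {baseUrl}/api/tags.
// All names here are illustrative, not the PR's exports.

type FetchLike = (url: string) => Promise<{ ok: boolean; json(): Promise<unknown> }>;

export interface OllamaStatus {
  available: boolean;
  models: string[]; // model names reported by /api/tags, e.g. "llama3.1:latest"
}

export async function checkOllama(
  baseUrl: string,
  fetchImpl: FetchLike = (globalThis as { fetch: FetchLike }).fetch
): Promise<OllamaStatus> {
  try {
    // Strip a trailing slash so "http://localhost:11434/" also works.
    const res = await fetchImpl(`${baseUrl.replace(/\/$/, "")}/api/tags`);
    if (!res.ok) return { available: false, models: [] };
    // Ollama's /api/tags responds with { models: [{ name: "...", ... }] }.
    const body = (await res.json()) as { models?: Array<{ name: string }> };
    return { available: true, models: (body.models ?? []).map((m) => m.name) };
  } catch {
    // Connection refused etc.: Ollama is not running at baseUrl.
    return { available: false, models: [] };
  }
}
```

Returning `{ available: false }` instead of throwing lets the cli-status route report a clean "Ollama not running" state rather than surfacing a raw network error.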
Checklist
- npx tsc --noEmit passes

https://claude.ai/code/session_01HFSAKuoayERDciumSrRUyt