refactor: remove Ollama and add custom provider configuration #546
Merged
Conversation
Add ExecutionConfig struct for controlling runtime execution:

- max_agent_threads: Maximum concurrent agent threads
- max_tool_threads: Maximum concurrent tool executions
- command_timeout_secs: Timeout for shell commands
- http_timeout_secs: Timeout for HTTP requests
- streaming_enabled: Enable/disable streaming responses

This allows users to customize execution behavior via config files.
Add CustomProviderConfig and CustomModelConfig types for user-defined LLM providers in config.toml:

- base_url: Provider API endpoint
- api_type: openai/anthropic/openai-compatible
- api_key_env: Environment variable for the API key
- default_model: Default model for the provider
- models: Available models with capabilities
- headers: Custom HTTP headers

Enables users to add their own providers without code changes.
Add CLI arguments for controlling execution behavior:

- --max-agent-threads: Maximum concurrent agent threads
- --max-tool-threads: Maximum concurrent tool executions
- --command-timeout: Timeout for shell commands, in seconds
- --http-timeout: Timeout for HTTP requests, in seconds
- --no-streaming: Disable streaming responses

These arguments override config file settings at runtime.
Remove Ollama integration from the backend:

- Delete the cortex-ollama crate entirely
- Remove it from workspace members and dependencies
- Remove Ollama model presets from cortex-common
- Remove OllamaEmbedder from cortex-engine
- Remove Ollama provider references from CLI utilities
- Remove Ollama options from the app-server config
- Clean up related imports and references

Users can now add custom local providers via the new custom provider configuration system.
Remove all Ollama provider references from the GUI:

- Delete the OllamaProvider.ts implementation
- Remove Ollama from the AIProvider enum (backend)
- Remove Ollama from the LLMProviderType union type
- Remove Ollama from provider configuration objects
- Remove Ollama from model selectors and settings
- Remove Ollama from the onboarding wizard
- Update package-lock.json dependencies

The custom providers system now lets users configure local inference endpoints without built-in Ollama support.
Summary
This PR removes the built-in Ollama provider and adds a flexible custom provider configuration system, allowing users to configure their own LLM providers.
Changes
New Features
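- ExecutionConfig for controlling runtime execution (agent/tool thread limits, command and HTTP timeouts, streaming toggle)
- Custom provider configuration (CustomProviderConfig / CustomModelConfig) in config.toml for user-defined LLM providers
- CLI arguments that override execution settings at runtime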
Refactoring
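- Removed the cortex-ollama crate and all Ollama references from the backend (cortex-common presets, cortex-engine embedder, CLI utilities, app-server config)
- Removed Ollama from the GUI (OllamaProvider.ts, AIProvider enum, LLMProviderType, provider configuration objects, model selectors, settings, onboarding wizard)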
Configuration Examples
Execution Config (config.toml)
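The original snippet was not captured here; below is a minimal sketch of what an execution section could look like, assuming the keys mirror the ExecutionConfig fields listed in the commits (the section name and default values are illustrative):

```toml
[execution]
max_agent_threads = 4        # maximum concurrent agent threads
max_tool_threads = 8         # maximum concurrent tool executions
command_timeout_secs = 120   # timeout for shell commands
http_timeout_secs = 60       # timeout for HTTP requests
streaming_enabled = true     # enable/disable streaming responses
```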
Custom Provider (config.toml)
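Again a sketch rather than the original example: the exact table names and how models are nested are assumptions, but the keys follow the CustomProviderConfig / CustomModelConfig fields described above.

```toml
# Illustrative layout; table names are assumptions.
[custom_providers.my_provider]
base_url = "https://api.example.com/v1"   # provider API endpoint
api_type = "openai-compatible"            # openai | anthropic | openai-compatible
api_key_env = "MY_PROVIDER_API_KEY"       # environment variable holding the API key
default_model = "my-model"

[custom_providers.my_provider.headers]
"X-Custom-Header" = "value"               # custom HTTP headers sent with each request

[[custom_providers.my_provider.models]]
name = "my-model"
# capability flags are illustrative
supports_tools = true
```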
CLI Arguments
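A sketch of how the new flags could be combined on one invocation; the binary name is an assumption, the flags are the ones added in this PR:

```bash
cortex \
  --max-agent-threads 4 \
  --max-tool-threads 8 \
  --command-timeout 120 \
  --http-timeout 60 \
  --no-streaming
```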
Testing
Migration Notes
Users currently using Ollama can configure it as a custom provider with:
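A sketch, assuming the same custom-provider layout as above. Ollama exposes an OpenAI-compatible API at /v1, so it can be pointed at via the openai-compatible api_type; the model name and key variable are illustrative (Ollama does not require an API key).

```toml
[custom_providers.ollama]
base_url = "http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
api_type = "openai-compatible"
api_key_env = "OLLAMA_API_KEY"           # no key needed; any placeholder works
default_model = "llama3.1"               # illustrative model name

[[custom_providers.ollama.models]]
name = "llama3.1"
```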