refactor: split dictation cleanup from agent invocations + inference scope & provider registries (#677)
Open
gabrielste1n wants to merge 2 commits into main from
Conversation
… introduce inference scope and provider registries

- Add `PROMPT_KINDS` registry (cleanup, dictationAgent, chatAgent) with one store-backed `customPrompts` record and a single `resolvePrompt(kind, opts)` read path. Migration runs only if the legacy `customUnifiedPrompt` / `agentSystemPrompt` keys exist; fresh installs are no-ops.
- Add `INFERENCE_SCOPES` map (dictationCleanup, dictationAgent, noteFormatting, chatIntelligence) with `selectResolvedLLMConfig` and `setResolvedLLMConfig`. Fallback semantics (noteFormatting → dictationCleanup) are baked into the scope definition.
- Wake-word agent split: new dictationAgent scope, new store fields, a fourth LLM settings tab, and a DictationAgentSettings panel. audioManager routes to the dictationAgent model and prompt when the wake word fires and a model is configured; otherwise it falls back to the existing single-model behavior.
- Provider registry: extract 7 of 8 providers (Anthropic, Gemini, Groq, Local, Enterprise, OpenWhispr, LAN) into `services/ai/inferenceProviders/` behind a shared `InferenceProvider` interface. The ReasoningService dispatcher shrinks from a 40-line switch to a registry lookup. The OpenAI/custom path stays inline pending endpoint-discovery state relocation.
- PromptStudio gains a `kind` prop so it can edit any prompt kind. The AgentModeSettings textarea is now reactive via store-backed customPrompts.
- detectAgentName extracted to `config/agentDetection.ts`.
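The registry-plus-single-read-path idea in the first bullet can be sketched roughly as below. This is a minimal illustration, not the PR's code: the default prompt strings are invented placeholders, and the real `customPrompts` record is store-backed rather than a module-level object.

```typescript
// Hypothetical sketch of the PROMPT_KINDS registry; defaults are invented.
type PromptKind = "cleanup" | "dictationAgent" | "chatAgent";

const PROMPT_KINDS: Record<PromptKind, { defaultPrompt: string }> = {
  cleanup: { defaultPrompt: "Clean up the dictated text." },
  dictationAgent: { defaultPrompt: "Carry out the dictated instruction." },
  chatAgent: { defaultPrompt: "Respond conversationally." },
};

// Store-backed overrides; in the app this record lives in the store.
const customPrompts: Partial<Record<PromptKind, string>> = {};

// Single read path: a custom prompt wins, else the registry default.
function resolvePrompt(kind: PromptKind): string {
  return customPrompts[kind] ?? PROMPT_KINDS[kind].defaultPrompt;
}
```

The point of funneling every caller through `resolvePrompt` is that migration and fallback logic live in one place instead of being duplicated at each call site.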
…figEditor

Rename the cleanup/note-formatting/chat-agent LLM settings to scope-specific names (cleanup*, noteFormatting*, chatAgent*) so the four inference scopes are distinct in storage, env vars, and IPC. Adds a one-time localStorage migration and env-var fallbacks for legacy keys. Extracts the shared mode-selector / model-picker UI from four near-identical copies into InferenceConfigEditor, scoped by InferenceScope, and pulls processWithOpenAI out of ReasoningService into a standalone openai provider behind PROVIDER_REGISTRY.
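The PROVIDER_REGISTRY dispatch described above might look something like the following sketch. The `process` method name, error message, and echo provider are assumptions for illustration, not the PR's actual `InferenceProvider` API.

```typescript
// Hypothetical shape of the shared provider interface and registry lookup.
interface InferenceProvider {
  id: string;
  process(systemPrompt: string, text: string): Promise<string>;
}

const PROVIDER_REGISTRY = new Map<string, InferenceProvider>();

function registerProvider(provider: InferenceProvider): void {
  PROVIDER_REGISTRY.set(provider.id, provider);
}

// The dispatcher reduces to a lookup instead of a per-provider switch.
async function dispatch(
  providerId: string,
  systemPrompt: string,
  text: string
): Promise<string> {
  const provider = PROVIDER_REGISTRY.get(providerId);
  if (!provider) throw new Error(`Unknown inference provider: ${providerId}`);
  return provider.process(systemPrompt, text);
}

// Example: a stand-in "openai" provider that just echoes its input.
registerProvider({
  id: "openai",
  async process(systemPrompt, text) {
    return `[${systemPrompt}] ${text}`;
  },
});
```

Registering providers against a shared interface is what lets new backends be added without touching the dispatcher at all.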
Summary
Three layered abstractions, plus the user-facing dictation-agent split, together let the wake-word agent path use its own model and prompt, separate from text cleanup.
1. Prompt registry (`src/config/prompts/`)
2. Inference scope resolver (`src/config/inferenceScopes.ts`)
3. Dictation-agent split (user-facing feature)
4. Provider registry (`src/services/ai/inferenceProviders/`)
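Item 2's scope resolver, with the noteFormatting → dictationCleanup fallback baked into the scope definition, could be sketched as below. The `fallback` field, config shape, and the module-level `configs` record are illustrative assumptions; the real config is persisted in the store.

```typescript
// Hypothetical sketch of INFERENCE_SCOPES with declarative fallbacks.
type InferenceScope =
  | "dictationCleanup"
  | "dictationAgent"
  | "noteFormatting"
  | "chatIntelligence";

interface LLMConfig {
  provider: string;
  model: string;
}

const INFERENCE_SCOPES: Record<InferenceScope, { fallback?: InferenceScope }> = {
  dictationCleanup: {},
  dictationAgent: {},
  noteFormatting: { fallback: "dictationCleanup" }, // falls back when unset
  chatIntelligence: {},
};

// Per-scope configs; in the app this state is store-backed.
const configs: Partial<Record<InferenceScope, LLMConfig>> = {};

// Resolve a scope's config, following its declared fallback when unset.
function selectResolvedLLMConfig(scope: InferenceScope): LLMConfig | undefined {
  const own = configs[scope];
  if (own) return own;
  const fallback = INFERENCE_SCOPES[scope].fallback;
  return fallback ? selectResolvedLLMConfig(fallback) : undefined;
}
```

Putting the fallback in the scope definition, rather than in the callers, means every consumer of `selectResolvedLLMConfig` gets the same semantics for free.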
Backward compatibility
Test plan
Follow-ups (deliberately out of scope)