Fix: read LLM config from nao_config.yaml as a fallback #421
d-axel-b wants to merge 2 commits into getnao:main
Conversation
The UI backend was ignoring the `llm` section in nao_config.yaml, even though it already reads the same file for database connections. This caused "The selected model could not be resolved" errors after a DB reset, forcing users to re-enter their API key through the UI settings. Add nao_config.yaml as a third fallback (after DB and env vars) in resolveProviderSettings, resolveProviderModel, and getProjectAvailableModels. Add 13 unit tests covering the fallback priority chain for all three functions.
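The three-step fallback described above can be sketched as a single nullish-coalescing chain. This is a hedged illustration: `LlmSettings` and `resolveSettings` are stand-in names, not the actual helpers in `apps/backend/src/utils/llm.ts`.

```typescript
// Illustrative shape of a resolved LLM credential source.
type LlmSettings = { provider: string; apiKey: string };

// Sketch of the resolution order: DB config → env vars → nao_config.yaml.
// Each argument is whatever that source yielded, or null if it had nothing.
function resolveSettings(
  fromDb: LlmSettings | null,
  fromEnv: LlmSettings | null,
  fromYaml: LlmSettings | null,
): LlmSettings | null {
  return fromDb ?? fromEnv ?? fromYaml;
}
```

With this ordering, `nao_config.yaml` only supplies credentials when both the DB and the environment are silent, so existing setups keep their current behavior.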
1 issue found across 2 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="apps/backend/src/utils/llm.ts">
<violation number="1" location="apps/backend/src/utils/llm.ts:198">
P1: Unvalidated provider value from nao_config.yaml can cause runtime errors when deriving model metadata. Add runtime validation to ensure the provider value is a valid LlmProvider before using it in getDefaultModelId/getModelName.</violation>
</file>
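The validation the reviewer asks for could look like the following sketch. The provider list and function names here are assumptions for illustration; the real `LlmProvider` union in the repo may differ.

```typescript
// Hypothetical allow-list of providers; the real set lives in llm.ts.
const KNOWN_PROVIDERS = ["openai", "anthropic", "google"] as const;
type LlmProvider = (typeof KNOWN_PROVIDERS)[number];

// Type predicate narrowing an arbitrary string to LlmProvider.
function isLlmProvider(value: string): value is LlmProvider {
  return (KNOWN_PROVIDERS as readonly string[]).includes(value);
}

// Validate a provider value read from nao_config.yaml before it is
// passed to getDefaultModelId/getModelName, failing loudly instead of
// surfacing a confusing runtime error later.
function parseProvider(raw: string): LlmProvider {
  if (!isLlmProvider(raw)) {
    throw new Error(`Unknown LLM provider in nao_config.yaml: ${raw}`);
  }
  return raw;
}
```

Validating at the YAML boundary keeps the rest of the resolution chain working with a trusted `LlmProvider` value.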
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Just to be sure, how did you launch the UI such that it ignored the settings? Was it in dev or prod? Currently we decided to make the UI/backend unaware of the
Hello @Bl3f - I was running it through . What's disturbing is that I did the configuration of the databases / data warehouses through . Just to be sure I understand: what are you planning to keep in nao_config, and what do you prefer to keep in the nao db?
Any update on this one? :)
Sorry @d-axel-b, forgot to answer! So yes, when running through . The reason here is to keep the context lib + nao_config.yaml and the agentic loop separated, with only ENV variables doing the bridge. When it comes to what goes into nao config vs. the db: nao config should contain everything related to getting the context, whereas the db is for everything related to the agentic loop. In this case LLM config is a bit in both places, because to pull context and use AI annotations, or to use nao test, you will need an LLM key. Anyway, would love to chat with you about this if you want, esp. as it will change a bit when going multi-project (I'm working on it). Your PR could be merged, but I'm not sure it simplifies the understanding of where to put LLM keys when working locally.
Summary
The UI backend was ignoring the `llm` section in `nao_config.yaml`, causing "The selected model could not be resolved" errors whenever the internal SQLite DB was reset (e.g. after rebasing from upstream), even though the config was already declared in `nao_config.yaml`.

What changed
`apps/backend/src/utils/llm.ts` — added `nao_config.yaml` as a third fallback in the credential resolution chain:

Priority: DB config → env vars → `nao_config.yaml`

The three affected functions are:

- `resolveProviderSettings` — used by web search tools
- `resolveProviderModel` — used by the agent on every chat message
- `getProjectAvailableModels` — used to populate the model selector in the UI

`apps/backend/tests/llm.test.ts` — 13 unit tests covering:

- the fallback to `nao_config.yaml`
- skipping `nao_config.yaml` when the provider doesn't match
- `getProjectAvailableModels` when the provider is already covered

Test plan
- `npx vitest run tests/llm.test.ts` — 13 tests pass
- `npm run lint` — no errors
- `llm` in `nao_config.yaml` — model resolves correctly and chat works
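One of the behaviors under test, skipping the `nao_config.yaml` fallback when its provider doesn't match the requested one, can be sketched like this. `apiKeyFromYaml` and the `YamlLlm` shape are illustrative stand-ins, not the real helpers from the test suite.

```typescript
// Illustrative shape of an `llm` section read from nao_config.yaml.
type YamlLlm = { provider: string; apiKey: string };

// The yaml fallback should only contribute a key when its provider
// matches the one being resolved; otherwise resolution moves on.
function apiKeyFromYaml(requested: string, yaml: YamlLlm | null): string | null {
  if (!yaml || yaml.provider !== requested) return null;
  return yaml.apiKey;
}
```

A unit test for this asserts a key on a provider match and `null` on a mismatch or a missing `llm` section.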