fix: report openAI connection as closed if models cannot be retrieved #1015
jeffmaury wants to merge 3 commits into kortex-hub:main from
Conversation
📝 Walkthrough

Track provider connection status locally in the OpenAI-compatible provider: the model fetch on restore is wrapped in try/catch, and failures set the connection status to 'stopped'.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
extensions/openai-compatible/src/openAI.ts (1)
121-128: ⚠️ Potential issue | 🟡 Minor

Pre-existing bug in `removeConnectionInfo`: `ConnectionInfo[]` is joined without mapping to strings, corrupting secret storage for multi-connection scenarios.

`tokens.filter(...)` returns a `ConnectionInfo[]`. Calling `.join(TOKEN_SEPARATOR)` on it invokes `toString()` on each object, producing `"[object Object],[object Object]"` rather than the `"apiKey|baseURL"` format. This corrupts the stored token string whenever there is more than one connection after deletion. (The single-connection case accidentally works because filtering the only entry yields `[]`, and `[].join(...)` returns `''`, which `getConnectionInfos` treats as an empty list.)

`saveConnectionInfo` at Line 109 correctly maps `ConnectionInfo` objects to `` `${t.apiKey}|${t.baseURL}` `` strings before joining; `removeConnectionInfo` needs the same step.

🐛 Proposed fix
```diff
  private async removeConnectionInfo(token: string, baseURL: string): Promise<void> {
    // get existing tokens
    const tokens = await this.getConnectionInfos();
    // filter out the token
-   const raw = tokens.filter(t => t.apiKey !== token || t.baseURL !== baseURL).join(TOKEN_SEPARATOR);
+   const raw = tokens
+     .filter(t => t.apiKey !== token || t.baseURL !== baseURL)
+     .map(t => `${t.apiKey}${INFO_SEPARATOR}${t.baseURL}`)
+     .join(TOKEN_SEPARATOR);
    // save to secret storage
    await this.secrets.store(TOKENS_KEY, raw);
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extensions/openai-compatible/src/openAI.ts` around lines 121 - 128, removeConnectionInfo currently joins ConnectionInfo objects directly, causing "[object Object]" to be stored; update removeConnectionInfo (the method named removeConnectionInfo) to map the filtered tokens to the same string format used in saveConnectionInfo (e.g., `${t.apiKey}|${t.baseURL}`) before calling .join(TOKEN_SEPARATOR), then store that string with this.secrets.store(TOKENS_KEY, raw); keep the existing filter logic that removes the entry where both apiKey and baseURL match the provided token and baseURL.
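To make the `toString()` behavior concrete, here is a minimal standalone sketch; the `ConnectionInfo` shape and separator constants mirror the names quoted above, and the key/URL values are purely illustrative:

```typescript
// Minimal sketch of the join bug described above; values are illustrative.
type ConnectionInfo = { apiKey: string; baseURL: string };

const TOKEN_SEPARATOR = ',';
const INFO_SEPARATOR = '|';

const tokens: ConnectionInfo[] = [
  { apiKey: 'k1', baseURL: 'http://a/v1' },
  { apiKey: 'k2', baseURL: 'http://b/v1' },
];

// Joining objects directly calls Object.prototype.toString on each entry:
const broken = tokens.join(TOKEN_SEPARATOR);
console.log(broken); // [object Object],[object Object]

// Mapping to the apiKey|baseURL format first produces the parseable string:
const fixed = tokens
  .map(t => `${t.apiKey}${INFO_SEPARATOR}${t.baseURL}`)
  .join(TOKEN_SEPARATOR);
console.log(fixed); // k1|http://a/v1,k2|http://b/v1
```

This is also why the single-connection case accidentally works: filtering the only entry yields `[]`, and `[].join(',')` is `''`.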
```ts
let models: InferenceModel[] = [];
let status: ProviderConnectionStatus = 'unknown';

try {
  models = await this.listModels(baseURL, token);
} catch (err: unknown) {
  status = 'stopped';
}
```
status is never set to 'started' on the success path — working connections will always report 'unknown'.
The status variable is initialized to 'unknown' and is only reassigned in the catch block (to 'stopped'). When listModels succeeds, status remains 'unknown', so every healthy connection is reported to the UI as unknown rather than started. The ProviderConnectionStatus type includes 'started' precisely for this case.
Additionally, the catch block silently discards the error without any logging. Previously, when listModels threw, registerInferenceProviderConnection itself would throw, and restoreConnections' outer try/catch would log the failure. With this change, model-fetch errors are swallowed silently — reducing observability on connection restoration failures.
🐛 Proposed fix

```diff
  let models: InferenceModel[] = [];
  let status: ProviderConnectionStatus = 'unknown';
  try {
    models = await this.listModels(baseURL, token);
+   status = 'started';
  } catch (err: unknown) {
+   console.error(`openai: failed to list models for baseURL ${baseURL}`, err);
    status = 'stopped';
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@extensions/openai-compatible/src/openAI.ts` around lines 161 - 168, The code
never sets status to 'started' after a successful listModels call and also
swallows errors; update the block in registerInferenceProviderConnection so that
after models = await this.listModels(baseURL, token); you set status =
'started', and in the catch (err) block log the error (using the existing logger
or console) and set status = 'stopped' then rethrow the error so
restoreConnections/registerInferenceProviderConnection's outer catch can observe
and log the failure; reference symbols: status, models, listModels,
ProviderConnectionStatus, registerInferenceProviderConnection,
restoreConnections.
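The local status tracking this comment asks for can be sketched in isolation. This is a hedged simplification, not the extension's real API: the `Connection` shape and the `listModels` stand-in are assumptions, and the rethrow the prompt mentions is omitted to keep the sketch self-contained. It shows why a closure-based `status()` reports the tracked value:

```typescript
// Hedged sketch of local connection-status tracking; the Connection shape and
// listModels stand-in are simplified assumptions, not the real extension API.
type ProviderConnectionStatus = 'unknown' | 'started' | 'stopped';

interface Connection {
  models: string[];
  status: () => ProviderConnectionStatus;
}

async function buildConnection(listModels: () => Promise<string[]>): Promise<Connection> {
  let models: string[] = [];
  let status: ProviderConnectionStatus = 'unknown';
  try {
    models = await listModels();
    status = 'started'; // record the success path explicitly
  } catch (err: unknown) {
    console.error('failed to list models', err); // keep the failure observable
    status = 'stopped';
  }
  // status() closes over the local variable, so callers see the tracked value
  return { models, status: () => status };
}
```

A resolving `listModels` then yields `status() === 'started'`, while a rejecting one yields `'stopped'` with an empty model list.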
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
extensions/openai-compatible/src/openAI.spec.ts (2)
248-266: ⚠️ Potential issue | 🟡 Minor

Stale test name contradicts the updated assertion.

The test is named `'should restore a single connection from secret storage on init'` but now asserts `registerInferenceProviderConnection` was called twice. The name documents the old (discarded) behavior and will mislead future maintainers.

Additionally, neither this test nor the "should restore multiple connections" test at line 229 verifies that the failed connection is actually registered with a `'stopped'` status — which is the core of the PR's fix. Consider asserting `call.status()` on the relevant registered call.

✏️ Suggested name and status assertion

```diff
- test('should restore a single connection from secret storage on init', async () => {
+ test('should register all connections even if model fetch fails, marking failed ones as stopped', async () => {
    vi.mocked(SECRET_STORAGE_MOCK.get).mockResolvedValue('key1|http://a/v1,key2|http://b/v1');
    fetchMock.mockResolvedValueOnce({ status: 500, json: async () => ({}) });

    const openai = new OpenAI(PROVIDER_API_MOCK, SECRET_STORAGE_MOCK);
    await openai.init();

-   // two connections should be attempted
    expect(PROVIDER_MOCK.registerInferenceProviderConnection).toHaveBeenCalledTimes(2);
+
+   // first connection (500 response) should be registered as stopped
+   const failedCall = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0][0];
+   expect(failedCall.status()).toBe('stopped');
+
+   // second connection (200 response) should be registered as started/unknown
+   const successCall = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[1][0];
+   expect(successCall.status()).not.toBe('stopped');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extensions/openai-compatible/src/openAI.spec.ts` around lines 248 - 266, Rename the test 'should restore a single connection from secret storage on init' to reflect that it expects two restored connections (e.g., 'should restore multiple connections from secret storage on init'), and add an assertion that the registration call for the failed provider shows a stopped status: after creating OpenAI with SECRET_STORAGE_MOCK and calling openai.init(), inspect the captured calls to PROVIDER_MOCK.registerInferenceProviderConnection and assert the relevant call's status() is 'stopped' (use the same call object reference from the mocked registerInferenceProviderConnection result) to verify the PR's fix.
269-311: ⚠️ Potential issue | 🟡 Minor

Test names are inverted relative to the assertions, and key behavioral outcomes are unverified.

All three test names say "…should throw" but the assertions now use `.resolves.toBe(undefined)`. This is directly misleading — rename them to reflect the new non-throwing behavior.

Beyond naming, these tests do not assert two important outcomes of the new behavior:

- Whether `registerInferenceProviderConnection` is called (and if so, whether `status()` returns `'stopped'`).
- Whether `SECRET_STORAGE_MOCK.store` is called — if credentials are persisted even when model fetching fails via the factory `create` path, that could silently accumulate bad entries.

✏️ Proposed naming and assertion additions

```diff
- test('non-200 response should throw', async () => {
+ test('non-200 response should resolve without error and register connection as stopped', async () => {
    ...
    await expect(create({ 'openai.factory.apiKey': 'k', 'openai.factory.baseURL': 'http://x/v1' })).resolves.toBe(
      undefined,
    );
+   const call = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0]?.[0];
+   expect(call?.status()).toBe('stopped');
  });

- test('missing data field should throw', async () => {
+ test('missing data field should resolve without error and register connection as stopped', async () => {
    ...
    await expect(create({ 'openai.factory.apiKey': 'k', 'openai.factory.baseURL': 'http://x/v1' })).resolves.toBe(
      undefined,
    );
+   const call = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0]?.[0];
+   expect(call?.status()).toBe('stopped');
  });

- test('data not array should throw', async () => {
+ test('data not array should resolve without error and register connection as stopped', async () => {
    ...
    await expect(create({ 'openai.factory.apiKey': 'k', 'openai.factory.baseURL': 'http://x/v1' })).resolves.toBe(
      undefined,
    );
+   const call = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0]?.[0];
+   expect(call?.status()).toBe('stopped');
  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extensions/openai-compatible/src/openAI.spec.ts` around lines 269 - 311, Rename the three tests so names reflect they resolve (e.g., "non-200 response should return undefined" etc.) and add assertions that verify the factory's create path behaves correctly: after calling the create function (obtained from PROVIDER_MOCK.setInferenceProviderConnectionFactory -> create) assert whether PROVIDER_MOCK.registerInferenceProviderConnection was invoked and, if invoked, that the returned connection's status() === 'stopped'; also assert SECRET_STORAGE_MOCK.store was or was not called as appropriate to ensure credentials are not silently persisted on failed model fetch. Use the OpenAI init flow to obtain create, and reference create, registerInferenceProviderConnection, SECRET_STORAGE_MOCK.store and the connection.status() in your expects.
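The call-inspection step those assertions rely on can be shown without the real mocks. In this framework-free sketch the connection names are illustrative and the capture mechanics are an assumption standing in for a vitest mock; it demonstrates how pulling the first argument of a recorded call exposes `status()`:

```typescript
// Framework-free sketch of inspecting registered connections, mirroring the
// vi.mocked(...).mock.calls[0][0] access pattern in the suggested assertions.
type Registered = { name: string; status: () => string };

const calls: Registered[][] = [];
const registerInferenceProviderConnection = (conn: Registered): void => {
  calls.push([conn]); // record call arguments the way a vitest mock would
};

// Simulate a restore where the first fetch failed and the second succeeded:
registerInferenceProviderConnection({ name: 'a', status: () => 'stopped' });
registerInferenceProviderConnection({ name: 'b', status: () => 'started' });

const failedCall = calls[0][0];
const successCall = calls[1][0];
console.log(failedCall.status());  // stopped
console.log(successCall.status()); // started
```

With a real vitest mock, `vi.mocked(fn).mock.calls` plays the role of the `calls` array above.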
🧹 Nitpick comments (1)
extensions/openai-compatible/src/openAI.spec.ts (1)
156-185: Happy-path `status()` return value is not verified.

Line 181 only checks that `call.status` is a function. With the new locally-tracked `ProviderConnectionStatus`, the test should also assert the return value of `call.status()` to confirm the happy path yields a non-`'stopped'` status (e.g., `'started'`). Otherwise there is no unit coverage distinguishing a successful from a failed connection at the status level.

✏️ Suggested assertion

```diff
  expect(call.status).toEqual(expect.any(Function));
+ expect(call.status()).not.toBe('stopped');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extensions/openai-compatible/src/openAI.spec.ts` around lines 156 - 185, The test currently only asserts that call.status is a function but doesn't verify the returned ProviderConnectionStatus; update the happy-path test in openAI.spec.ts (the test for create / registerInferenceProviderConnection) to call call.status() and assert it returns a non-'stopped' value (for example assert it equals 'started' or matches ProviderConnectionStatus.started) so the successful connection status is covered; ensure you reference the same mocked registration call (vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0][0]) and add the assertion alongside the existing checks for call.name, call.sdk, call.models and call.credentials().
Codecov Report

❌ Patch coverage is
Moving to draft as it needs to resolve conflicts.
Fixes kortex-hub#768 Signed-off-by: Jeff MAURY <jmaury@redhat.com>
Signed-off-by: Jeff MAURY <jmaury@redhat.com>
Signed-off-by: Jeff MAURY <jmaury@redhat.com>
MarsKubeX left a comment
LGTM. thanks for the fix.
To test:
Fixes #768