
fix: report openAI connection as closed if models cannot be retrieved #1015

Open
jeffmaury wants to merge 3 commits into kortex-hub:main from jeffmaury:GH-768

Conversation

@jeffmaury
Contributor

To test:

  • Creates an OpenAI connection with a dummy URL and token
  • Verifies the connection is listed and can be deleted

Fixes #768

@jeffmaury jeffmaury requested a review from a team as a code owner February 19, 2026 08:46
@jeffmaury jeffmaury requested review from MarsKubeX, SoniaSandler and bmahabirbu and removed request for a team February 19, 2026 08:46
@coderabbitai

coderabbitai bot commented Feb 19, 2026

📝 Walkthrough


Track provider connection status locally in the OpenAI-compatible provider: the models fetch on restore is wrapped in try/catch, failures set the connection status to 'stopped', and the status accessor now returns the tracked status; tests are adjusted to reflect the changed restoration/error-handling behavior.

Changes

Cohort / File(s) | Summary

OpenAI provider logic (extensions/openai-compatible/src/openAI.ts):
Added InferenceModel import; introduced a local status variable (initialized to 'unknown'); wrapped the initial models fetch in try/catch, setting status = 'stopped' on failure; the status accessor now returns the tracked status instead of a constant.

Tests (extensions/openai-compatible/src/openAI.spec.ts):
Updated tests to expect two restoration attempts when multiple secrets exist; changed error-path expectations so several listModels failure modes resolve without throwing, altering expectations around create/restore sequencing.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The title directly reflects the main change: reporting an OpenAI connection as closed when models cannot be retrieved, which is the core fix for handling invalid connections.
Description check | ✅ Passed | The description is related to the changeset and references issue #768, providing testing context about creating and deleting OpenAI connections.
Linked Issues check | ✅ Passed | The PR addresses the core requirement from issue #768: making invalid connections visible and removable by reporting them as 'stopped' when model retrieval fails, enabling proper restoration and UI handling.
Out of Scope Changes check | ✅ Passed | All changes are directly related to the linked issue: modifying connection status tracking and error handling for OpenAI connections, with corresponding test updates to validate the new behavior.
Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
extensions/openai-compatible/src/openAI.ts (1)

121-128: ⚠️ Potential issue | 🟡 Minor

Pre-existing bug in removeConnectionInfo: ConnectionInfo[] is joined without mapping to strings, corrupting secret storage for multi-connection scenarios.

tokens.filter(...) returns a ConnectionInfo[]. Calling .join(TOKEN_SEPARATOR) on it invokes toString() on each object, producing "[object Object],[object Object]" rather than the "apiKey|baseURL" format. This corrupts the stored token string whenever there is more than one connection after deletion. (The single-connection case accidentally works because filtering the only entry yields [], and [].join(...) returns '', which getConnectionInfos treats as an empty list.)
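The corruption is easy to reproduce in isolation. A minimal sketch — the ConnectionInfo shape and separator constants below are assumed from context, not copied from the extension:

```typescript
// Minimal reproduction of the join-on-objects bug (hypothetical shapes).
interface ConnectionInfo {
  apiKey: string;
  baseURL: string;
}

const TOKEN_SEPARATOR = ',';
const INFO_SEPARATOR = '|';

const tokens: ConnectionInfo[] = [
  { apiKey: 'key1', baseURL: 'http://a/v1' },
  { apiKey: 'key2', baseURL: 'http://b/v1' },
];

// Buggy: Array.prototype.join stringifies each element with toString(),
// so plain objects become "[object Object]".
const corrupted = tokens.join(TOKEN_SEPARATOR);

// Correct: serialize each entry to "apiKey|baseURL" before joining.
const raw = tokens
  .map(t => `${t.apiKey}${INFO_SEPARATOR}${t.baseURL}`)
  .join(TOKEN_SEPARATOR);

console.log(corrupted); // "[object Object],[object Object]"
console.log(raw);       // "key1|http://a/v1,key2|http://b/v1"
```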

saveConnectionInfo at Line 109 correctly maps ConnectionInfo objects to `${t.apiKey}|${t.baseURL}` strings before joining; removeConnectionInfo needs the same step.

🐛 Proposed fix
  private async removeConnectionInfo(token: string, baseURL: string): Promise<void> {
    // get existing tokens
    const tokens = await this.getConnectionInfos();
    // filter out the token
-   const raw = tokens.filter(t => t.apiKey !== token || t.baseURL !== baseURL).join(TOKEN_SEPARATOR);
+   const raw = tokens
+     .filter(t => t.apiKey !== token || t.baseURL !== baseURL)
+     .map(t => `${t.apiKey}${INFO_SEPARATOR}${t.baseURL}`)
+     .join(TOKEN_SEPARATOR);
    // save to secret storage
    await this.secrets.store(TOKENS_KEY, raw);
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/openai-compatible/src/openAI.ts` around lines 121 - 128,
removeConnectionInfo currently joins ConnectionInfo objects directly, causing
"[object Object]" to be stored; update removeConnectionInfo (the method named
removeConnectionInfo) to map the filtered tokens to the same string format used
in saveConnectionInfo (e.g., `${t.apiKey}|${t.baseURL}`) before calling
.join(TOKEN_SEPARATOR), then store that string with
this.secrets.store(TOKENS_KEY, raw); keep the existing filter logic that removes
the entry where both apiKey and baseURL match the provided token and baseURL.

Comment on lines +161 to +168
let models: InferenceModel[] = [];
let status: ProviderConnectionStatus = 'unknown';

try {
models = await this.listModels(baseURL, token);
} catch (err: unknown) {
status = 'stopped';
}


⚠️ Potential issue | 🟠 Major

status is never set to 'started' on the success path — working connections will always report 'unknown'.

The status variable is initialized to 'unknown' and is only reassigned in the catch block (to 'stopped'). When listModels succeeds, status remains 'unknown', so every healthy connection is reported to the UI as unknown rather than started. The ProviderConnectionStatus type includes 'started' precisely for this case.

Additionally, the catch block silently discards the error without any logging. Previously, when listModels threw, registerInferenceProviderConnection itself would throw, and restoreConnections' outer try/catch would log the failure. With this change, model-fetch errors are swallowed silently — reducing observability on connection restoration failures.

🐛 Proposed fix
  let models: InferenceModel[] = [];
  let status: ProviderConnectionStatus = 'unknown';

  try {
    models = await this.listModels(baseURL, token);
+   status = 'started';
  } catch (err: unknown) {
+   console.error(`openai: failed to list models for baseURL ${baseURL}`, err);
    status = 'stopped';
  }
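Applied in isolation, the suggested pattern can be sketched as below. Names and types are simplified and assumed — this is not the extension's actual code, and it follows the proposed diff in logging rather than rethrowing:

```typescript
// Sketch of locally tracked connection status (simplified, assumed names).
type ProviderConnectionStatus = 'unknown' | 'started' | 'stopped';

interface TrackedConnection {
  models: string[];
  status: () => ProviderConnectionStatus;
}

async function buildConnection(
  baseURL: string,
  listModels: (baseURL: string) => Promise<string[]>,
): Promise<TrackedConnection> {
  let models: string[] = [];
  let status: ProviderConnectionStatus = 'unknown';
  try {
    models = await listModels(baseURL);
    status = 'started'; // mark healthy connections explicitly
  } catch (err: unknown) {
    // Log instead of silently swallowing, then report the connection as down.
    console.error(`failed to list models for ${baseURL}`, err);
    status = 'stopped';
  }
  // The accessor closes over the tracked variable rather than returning a
  // constant, so the UI sees 'stopped' when the model fetch failed.
  return { models, status: () => status };
}
```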
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/openai-compatible/src/openAI.ts` around lines 161 - 168, The code
never sets status to 'started' after a successful listModels call and also
swallows errors; update the block in registerInferenceProviderConnection so that
after models = await this.listModels(baseURL, token); you set status =
'started', and in the catch (err) block log the error (using the existing logger
or console) and set status = 'stopped' then rethrow the error so
restoreConnections/registerInferenceProviderConnection's outer catch can observe
and log the failure; reference symbols: status, models, listModels,
ProviderConnectionStatus, registerInferenceProviderConnection,
restoreConnections.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
extensions/openai-compatible/src/openAI.spec.ts (2)

248-266: ⚠️ Potential issue | 🟡 Minor

Stale test name contradicts the updated assertion.

The test is named 'should restore a **single** connection from secret storage on init' but now asserts registerInferenceProviderConnection was called twice. The name documents the old (discarded) behavior and will mislead future maintainers.

Additionally, neither this test nor the "should restore multiple connections" test at line 229 verifies that the failed connection is actually registered with a 'stopped' status — which is the core of the PR's fix. Consider asserting call.status() on the relevant registered call.

✏️ Suggested name and status assertion
-  test('should restore a single connection from secret storage on init', async () => {
+  test('should register all connections even if model fetch fails, marking failed ones as stopped', async () => {
     vi.mocked(SECRET_STORAGE_MOCK.get).mockResolvedValue('key1|http://a/v1,key2|http://b/v1');
     fetchMock.mockResolvedValueOnce({ status: 500, json: async () => ({}) });

     const openai = new OpenAI(PROVIDER_API_MOCK, SECRET_STORAGE_MOCK);
     await openai.init();

-    // two connections should be attempted
     expect(PROVIDER_MOCK.registerInferenceProviderConnection).toHaveBeenCalledTimes(2);
+
+    // first connection (500 response) should be registered as stopped
+    const failedCall = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0][0];
+    expect(failedCall.status()).toBe('stopped');
+
+    // second connection (200 response) should be registered as started/unknown
+    const successCall = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[1][0];
+    expect(successCall.status()).not.toBe('stopped');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/openai-compatible/src/openAI.spec.ts` around lines 248 - 266,
Rename the test 'should restore a single connection from secret storage on init'
to reflect that it expects two restored connections (e.g., 'should restore
multiple connections from secret storage on init'), and add an assertion that
the registration call for the failed provider shows a stopped status: after
creating OpenAI with SECRET_STORAGE_MOCK and calling openai.init(), inspect the
captured calls to PROVIDER_MOCK.registerInferenceProviderConnection and assert
the relevant call's status() is 'stopped' (use the same call object reference
from the mocked registerInferenceProviderConnection result) to verify the PR's
fix.

269-311: ⚠️ Potential issue | 🟡 Minor

Test names are inverted relative to the assertions, and key behavioral outcomes are unverified.

All three test names say "…should throw" but the assertions now use .resolves.toBe(undefined). This is directly misleading — rename them to reflect the new non-throwing behavior.

Beyond naming, these tests do not assert two important outcomes of the new behavior:

  1. Whether registerInferenceProviderConnection is called (and if so, whether status() returns 'stopped').
  2. Whether SECRET_STORAGE_MOCK.store is called — if credentials are persisted even when model fetching fails via the factory create path, that could silently accumulate bad entries.
✏️ Proposed naming and assertion additions
-  test('non-200 response should throw', async () => {
+  test('non-200 response should resolve without error and register connection as stopped', async () => {
     ...
     await expect(create({ 'openai.factory.apiKey': 'k', 'openai.factory.baseURL': 'http://x/v1' })).resolves.toBe(
       undefined,
     );
+    const call = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0]?.[0];
+    expect(call?.status()).toBe('stopped');
   });

-  test('missing data field should throw', async () => {
+  test('missing data field should resolve without error and register connection as stopped', async () => {
     ...
     await expect(create({ 'openai.factory.apiKey': 'k', 'openai.factory.baseURL': 'http://x/v1' })).resolves.toBe(
       undefined,
     );
+    const call = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0]?.[0];
+    expect(call?.status()).toBe('stopped');
   });

-  test('data not array should throw', async () => {
+  test('data not array should resolve without error and register connection as stopped', async () => {
     ...
     await expect(create({ 'openai.factory.apiKey': 'k', 'openai.factory.baseURL': 'http://x/v1' })).resolves.toBe(
       undefined,
     );
+    const call = vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0]?.[0];
+    expect(call?.status()).toBe('stopped');
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/openai-compatible/src/openAI.spec.ts` around lines 269 - 311,
Rename the three tests so names reflect they resolve (e.g., "non-200 response
should return undefined" etc.) and add assertions that verify the factory's
create path behaves correctly: after calling the create function (obtained from
PROVIDER_MOCK.setInferenceProviderConnectionFactory -> create) assert whether
PROVIDER_MOCK.registerInferenceProviderConnection was invoked and, if invoked,
that the returned connection's status() === 'stopped'; also assert
SECRET_STORAGE_MOCK.store was or was not called as appropriate to ensure
credentials are not silently persisted on failed model fetch. Use the OpenAI
init flow to obtain create, and reference create,
registerInferenceProviderConnection, SECRET_STORAGE_MOCK.store and the
connection.status() in your expects.
🧹 Nitpick comments (1)
extensions/openai-compatible/src/openAI.spec.ts (1)

156-185: Happy-path status() return value is not verified.

Line 181 only checks that call.status is a function. With the new locally-tracked ProviderConnectionStatus, the test should also assert the return value of call.status() to confirm the happy path yields a non-'stopped' status (e.g., 'started'). Otherwise there is no unit coverage distinguishing a successful from a failed connection at the status level.

✏️ Suggested assertion
     expect(call.status).toEqual(expect.any(Function));
+    expect(call.status()).not.toBe('stopped');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extensions/openai-compatible/src/openAI.spec.ts` around lines 156 - 185, The
test currently only asserts that call.status is a function but doesn't verify
the returned ProviderConnectionStatus; update the happy-path test in
openAI.spec.ts (the test for create / registerInferenceProviderConnection) to
call call.status() and assert it returns a non-'stopped' value (for example
assert it equals 'started' or matches ProviderConnectionStatus.started) so the
successful connection status is covered; ensure you reference the same mocked
registration call
(vi.mocked(PROVIDER_MOCK.registerInferenceProviderConnection).mock.calls[0][0])
and add the assertion alongside the existing checks for call.name, call.sdk,
call.models and call.credentials().

@codecov-commenter

codecov-commenter commented Feb 19, 2026

Codecov Report

❌ Patch coverage is 83.33333% with 1 line in your changes missing coverage. Please review.

Files with missing lines | Patch % | Lines
extensions/openai-compatible/src/openAI.ts | 83.33% | 1 Missing ⚠️


@benoitf benoitf marked this pull request as draft March 6, 2026 12:47
@benoitf
Contributor

benoitf commented Mar 6, 2026

Moving to draft as it needs to resolve conflicts.

Signed-off-by: Jeff MAURY <jmaury@redhat.com>
@jeffmaury jeffmaury requested review from gastoner and removed request for SoniaSandler March 9, 2026 13:17
@jeffmaury jeffmaury marked this pull request as ready for review March 9, 2026 13:18
Signed-off-by: Jeff MAURY <jmaury@redhat.com>
@jeffmaury jeffmaury requested a review from MarsKubeX March 9, 2026 13:25
Contributor

@MarsKubeX MarsKubeX left a comment


LGTM. Thanks for the fix.



Development

Successfully merging this pull request may close these issues.

unable to remove an invalid OpenAI connection

4 participants