[AI Agent] Model selector only shows gpt-5.1-mini and ignores the configured model #32

@trikunai

Description

When the AI agent is enabled in the OpenCom web UI, the model selector is hardcoded to only show gpt-5.1-mini as an option. This causes two problems: the displayed model does not match the one actually being used by the backend, and users have no way to switch models through the UI.

(Screenshot: model selector dropdown showing only gpt-5.1-mini)

Bug 1 — Model selector is hardcoded

The model dropdown only shows gpt-5.1-mini regardless of what model is configured in Convex environment variables.

  • Steps to reproduce: Enable AI agent → open the model selector dropdown
  • Expected: Dropdown shows all available models fetched from the OpenAI API, with the currently configured one selected
  • Actual: Dropdown only shows gpt-5.1-mini

Bug 2 — Selected model does not match what's used

Even though the selector shows gpt-5.1-mini, the actual API calls are made with a different model (gpt-5-nano, as confirmed via the OpenAI API usage dashboard).

  • The UI state is misleading and the Convex env var (OPENAI_MODEL or equivalent) is either being ignored or overridden by a hardcoded default
  • Any model selection made in the UI should be persisted and honoured by the backend

Proposed fixes

  1. On agent enable / settings open, fetch the model list dynamically from GET https://api.openai.com/v1/models and populate the selector with the results (optionally filtered to chat-capable models)
  2. Read the currently configured model from the Convex env var and pre-select it in the dropdown
  3. Persist the user-selected model back to the Convex env (or equivalent settings store) so the backend uses the correct one
```js
// Example: fetch models and filter to chat-capable ones
const res = await fetch('https://api.openai.com/v1/models', {
  headers: { Authorization: `Bearer ${apiKey}` }
});
if (!res.ok) throw new Error(`Model list request failed: ${res.status}`);
const { data } = await res.json();
const chatModels = data
  .filter(m => m.id.startsWith('gpt-'))        // crude chat-model heuristic
  .sort((a, b) => b.created - a.created);      // newest first
```
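On the backend side, the fix for Bug 2 could be a single resolution helper with an explicit precedence order. A minimal sketch, assuming the Convex env var is named `OPENAI_MODEL` (the actual name in the codebase may differ):

```typescript
// Sketch: resolve which model the backend should actually use.
// Precedence: model selected in the UI > OPENAI_MODEL env var > hardcoded default.
function resolveModel(envModel: string | undefined, userSelected?: string): string {
  return userSelected ?? envModel ?? "gpt-5.1-mini";
}
```

Funneling every API call through one helper like this would also make the UI/backend mismatch impossible, since the selector could display `resolveModel(...)` rather than its own hardcoded value.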

Improvement — Support additional / custom OpenAI-compatible providers

Many users bring their own keys from third-party providers that expose an OpenAI-compatible API (e.g. Azure OpenAI, Groq, Together AI, local Ollama). The agent settings should support configuring a custom base URL alongside the API key, so these providers work out of the box without code changes.

Suggested settings fields:

  • Provider — dropdown: OpenAI, Custom (OpenAI-compatible)
  • Base URL — shown when "Custom" is selected; defaults to https://api.openai.com/v1
  • API key — existing field, unchanged
  • Model — dynamically fetched from whichever base URL is set

This would let the model selector reuse the same dynamic-fetch logic from the Bug 1 fix, just pointed at the custom base URL instead of OpenAI's.
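The settings fields above could feed a single model-listing function. A rough sketch (the `ProviderSettings` shape is an assumption, not an existing OpenCom type, and provider quirks such as Azure's deployment-based routing would still need separate handling):

```typescript
// Sketch: list models from any OpenAI-compatible base URL.
interface ProviderSettings {
  baseUrl: string; // e.g. https://api.openai.com/v1, or a custom provider's URL
  apiKey: string;
}

// Build the /models endpoint URL, tolerating a trailing slash in the base URL.
const modelsUrl = (baseUrl: string): string =>
  `${baseUrl.replace(/\/$/, "")}/models`;

async function listModels({ baseUrl, apiKey }: ProviderSettings): Promise<string[]> {
  const res = await fetch(modelsUrl(baseUrl), {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Model list request failed: ${res.status}`);
  const { data } = await res.json();
  return data.map((m: { id: string }) => m.id);
}
```

With this in place, "Provider: OpenAI" is just the default `baseUrl`, so the selector needs no provider-specific code paths.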


Environment

  • Interface: OpenCom web
  • Model shown in UI: gpt-5.1-mini
  • Model used in practice (per OpenAI dashboard): gpt-5-nano
  • API key configured via: Convex environment variables