feat(models): add configurable model ID format for GET /v1/models #333

vfeitoza wants to merge 1 commit
Conversation
Problem:

The GET /v1/models endpoint returns model IDs in qualified format (`provider/model`, e.g. `anthropic/claude-sonnet-4-6`). OpenAI-compatible clients such as Claude CLI, Cursor, and other tools expect simple model IDs (e.g. `claude-sonnet-4-6`). When these clients fetch the model list to validate a requested model against available models, the qualified IDs never match the simple ID the client intends to send, resulting in "model not found" errors on the client side. The internal routing already resolves both qualified and unqualified selectors correctly — the problem is exclusively in the public listing response format.

Solution:

Add a new configuration option `models_endpoint_id_format` (env: `MODELS_ENDPOINT_ID_FORMAT`) that controls the model ID format returned by GET /v1/models. Supported values:

- "qualified" (default): provider/model format (current behavior, no breaking change)
- "unqualified": simple model ID only, with OwnedBy set to the provider name for disambiguation
- "both": returns both qualified and unqualified entries, allowing clients that understand either format to work seamlessly

Implementation:

- config/models.go: ModelsEndpointIDFormat type with validation and normalization helpers following the existing ConfiguredProviderModelsMode pattern
- config/config.go: default value set to "qualified"
- internal/providers/registry.go: ListModelsWithFormat() dispatches to ListPublicModels (qualified), listModelsUnqualified, or listModelsBoth. Unqualified deduplicates by model ID (first provider wins, matching routing behavior). Both merges qualified + unqualified with ID-based deduplication.
- internal/providers/router.go: Router gains a modelsEndpointIDFormat field and a SetModelsEndpointIDFormat setter. ListModels uses the configured format via an interface assertion on the lookup.
- internal/app/app.go: wires the config to the Router after provider init
- config/config.example.yaml, .env.template: document the new option
- internal/providers/router_test.go: TestRouterListModelsWithFormat covers all three format modes
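The implementation summary describes a `ModelsEndpointIDFormat` type with validation and normalization helpers in config/models.go. The following is a minimal self-contained sketch of that shape — the helper names `ResolveModelsEndpointIDFormat` and `Valid` are taken from the review discussion below, and the code is placed in `package main` only so it runs standalone (the PR puts it in package config):

```go
package main

import (
	"fmt"
	"strings"
)

// ModelsEndpointIDFormat mirrors the option described above; in the PR this
// type lives in config/models.go, not package main.
type ModelsEndpointIDFormat string

const (
	ModelsEndpointIDFormatQualified   ModelsEndpointIDFormat = "qualified"
	ModelsEndpointIDFormatUnqualified ModelsEndpointIDFormat = "unqualified"
	ModelsEndpointIDFormatBoth        ModelsEndpointIDFormat = "both"
)

// ResolveModelsEndpointIDFormat trims and lower-cases user input and applies
// the documented default ("qualified") when the option is unset.
func ResolveModelsEndpointIDFormat(v ModelsEndpointIDFormat) ModelsEndpointIDFormat {
	s := ModelsEndpointIDFormat(strings.ToLower(strings.TrimSpace(string(v))))
	if s == "" {
		return ModelsEndpointIDFormatQualified
	}
	return s
}

// Valid reports whether the value is one of the three supported formats.
func (f ModelsEndpointIDFormat) Valid() bool {
	switch f {
	case ModelsEndpointIDFormatQualified, ModelsEndpointIDFormatUnqualified, ModelsEndpointIDFormatBoth:
		return true
	}
	return false
}

func main() {
	f := ResolveModelsEndpointIDFormat("  Both ")
	fmt.Println(f, f.Valid())                           // both true
	fmt.Println(ModelsEndpointIDFormat("typo").Valid()) // false
}
```

Normalizing in a resolver and then checking `Valid()` in `Load()` is what lets invalid input produce a clear configuration error instead of silently falling back to the default.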
Greptile Summary

Confidence Score: 3/5. Safe to merge only after fixing the non-deterministic deduplication in listModelsUnqualified — the default qualified format is unaffected, but operators using unqualified or both modes will get inconsistent OwnedBy values across restarts.

The new unqualified and both modes contain a real logic defect: listModelsUnqualified iterates r.modelsByProvider (a Go map) to deduplicate shared model IDs, so which provider wins — and therefore what OwnedBy value and model metadata are returned — is random on every cold-path call. The routing layer uses r.models (deterministic, first-registered order) as its source of truth, so the listing can claim a different owning provider than the one that would actually handle the request. The qualified default path is unaffected, keeping existing deployments safe.

Files to focus on: internal/providers/registry.go — specifically the listModelsUnqualified function and the dropped ModelCount doc comment.
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["GET /v1/models"] --> B["Router.ListModels()"]
    B --> C{lookup implements\nListModelsWithFormat?}
    C -- yes --> D["ListModelsWithFormat(format)"]
    C -- no --> E{lookup implements\nListPublicModels?}
    E -- yes --> F["ListPublicModels()\n(qualified, legacy)"]
    E -- no --> G["ListModels()\n(fallback)"]
    D --> H{format}
    H -- qualified --> I["ListPublicModels()\nprovider/model"]
    H -- unqualified --> J["listModelsUnqualified()\nmodel only"]
    H -- both --> K["listModelsBoth()\nqualified + unqualified"]
    J --> L["⚠ dedup via modelsByProvider\n(non-deterministic map iteration)"]
    K --> M["merge qualified list\nthen unqualified dedup"]
```
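The flowchart's first two branches are optional-interface probes on the router's lookup. A runnable sketch of that dispatch pattern — the interface and field names here are assumptions based on the method names in the chart, not the PR's actual declarations:

```go
package main

import "fmt"

type Model struct{ ID string }

// modelLookup is the minimal capability the Router always has (shape assumed).
type modelLookup interface{ ListModels() []Model }

// Optional capabilities probed at runtime, matching the flowchart branches.
type formatLister interface {
	ListModelsWithFormat(format string) []Model
}
type publicLister interface{ ListPublicModels() []Model }

type Router struct {
	lookup modelLookup
	format string
}

// ListModels walks the same decision tree as the flowchart: prefer the
// format-aware lister, fall back to the legacy qualified lister, then to
// plain ListModels.
func (r *Router) ListModels() []Model {
	if fl, ok := r.lookup.(formatLister); ok {
		return fl.ListModelsWithFormat(r.format)
	}
	if pl, ok := r.lookup.(publicLister); ok {
		return pl.ListPublicModels()
	}
	return r.lookup.ListModels()
}

// registry is a stub implementing only the legacy interfaces.
type registry struct{}

func (registry) ListModels() []Model       { return []Model{{ID: "claude-sonnet-4-6"}} }
func (registry) ListPublicModels() []Model { return []Model{{ID: "anthropic/claude-sonnet-4-6"}} }

func main() {
	r := &Router{lookup: registry{}, format: "unqualified"}
	// registry lacks ListModelsWithFormat, so the legacy qualified path wins
	// regardless of the configured format.
	fmt.Println(r.ListModels()[0].ID) // anthropic/claude-sonnet-4-6
}
```

Type assertions keep the new format knob backward-compatible: a lookup that predates ListModelsWithFormat simply never sees it.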
Reviews (1): Last reviewed commit: "feat(models): add configurable model ID ..."
```go
	r.mu.RLock()
	defer r.mu.RUnlock()

	seen := make(map[string]struct{})
	result := make([]core.Model, 0, len(r.models))
	for providerName, models := range r.modelsByProvider {
		for modelID, info := range models {
			if _, exists := seen[modelID]; exists {
				continue
			}
			seen[modelID] = struct{}{}
			model := info.Model
			model.ID = modelID
			model.OwnedBy = providerName
			result = append(result, model)
		}
	}
	sort.Slice(result, func(i, j int) bool { return result[i].ID < result[j].ID })
	return result
```
Non-deterministic deduplication breaks "first provider wins" guarantee
listModelsUnqualified iterates over r.modelsByProvider, a Go map whose iteration order is randomized per run. When two providers share the same model ID (e.g., gpt-4o on both openai and openrouter), which provider "wins" the deduplication — and therefore what OwnedBy and model metadata are returned — is random. The PR description says this matches routing behavior, but routing uses r.models (populated in registration order, first-registered wins deterministically) as the source of truth. Using r.models here instead would give the correct, stable result: the same provider that routing would actually use.
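A deterministic variant of the deduplication could iterate a registration-ordered slice instead of the map, as the reviewer suggests. The sketch below uses stand-in types (`registeredModel` and its fields are assumptions, not the PR's actual structs) to show why slice order makes "first provider wins" stable:

```go
package main

import (
	"fmt"
	"sort"
)

// Minimal stand-ins for the registry types under review (shapes assumed).
type Model struct {
	ID      string
	OwnedBy string
}

type registeredModel struct {
	provider string
	model    Model
}

// listModelsUnqualifiedDeterministic deduplicates over a slice kept in
// registration order, so the first-registered provider wins on every run,
// matching what the routing layer would actually pick.
func listModelsUnqualifiedDeterministic(models []registeredModel) []Model {
	seen := make(map[string]struct{}, len(models))
	result := make([]Model, 0, len(models))
	for _, rm := range models { // slice order is stable, unlike map iteration
		if _, ok := seen[rm.model.ID]; ok {
			continue
		}
		seen[rm.model.ID] = struct{}{}
		m := rm.model
		m.OwnedBy = rm.provider
		result = append(result, m)
	}
	sort.Slice(result, func(i, j int) bool { return result[i].ID < result[j].ID })
	return result
}

func main() {
	out := listModelsUnqualifiedDeterministic([]registeredModel{
		{provider: "openai", model: Model{ID: "gpt-4o"}},
		{provider: "openrouter", model: Model{ID: "gpt-4o"}}, // duplicate ID, later registration
		{provider: "anthropic", model: Model{ID: "claude-sonnet-4-6"}},
	})
	for _, m := range out {
		fmt.Println(m.ID, m.OwnedBy)
	}
	// claude-sonnet-4-6 anthropic
	// gpt-4o openai
}
```

The final sort only orders the output by ID; the winner of each duplicate is decided earlier, by iteration order, which is exactly the part Go maps leave unspecified.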
```go
	sort.Slice(result, func(i, j int) bool { return result[i].ID < result[j].ID })
	return result
}
func (r *ModelRegistry) ModelCount() int {
```
Missing blank line and doc comment between listModelsBoth and ModelCount. The original `// ModelCount returns the number of registered models` comment was dropped when the new functions were inserted.
```diff
 	sort.Slice(result, func(i, j int) bool { return result[i].ID < result[j].ID })
 	return result
 }
+
+// ModelCount returns the number of registered models
 func (r *ModelRegistry) ModelCount() int {
```
Codecov Report: ❌ patch coverage check failed.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@config/config.go`:
- Line 65: Add validation and normalization for ModelsEndpointIDFormat inside
the Load() function similar to how ConfiguredProviderModelsMode is handled:
after resolving the field, check that the value is one of the allowed constants
(e.g., ModelsEndpointIDFormatQualified and any other valid enum values),
normalize any synonyms if needed, and return an error if the value is invalid
instead of silently defaulting; update references to ModelsEndpointIDFormat and
ensure the defaulting code (which currently sets ModelsEndpointIDFormat:
ModelsEndpointIDFormatQualified) does not hide invalid user input by performing
this validation/normalization in Load().
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 833c32cd-18d6-4afc-952a-59421486093b
📒 Files selected for processing (8)
- .env.template
- config/config.example.yaml
- config/config.go
- config/models.go
- internal/app/app.go
- internal/providers/registry.go
- internal/providers/router.go
- internal/providers/router_test.go
```go
	OverridesEnabled:                true,
	KeepOnlyAliasesAtModelsEndpoint: false,
	ConfiguredProviderModelsMode:    ConfiguredProviderModelsModeFallback,
	ModelsEndpointIDFormat:          ModelsEndpointIDFormatQualified,
```
Add validation for ModelsEndpointIDFormat in Load().
ConfiguredProviderModelsMode is both resolved and validated in Load() (lines 169–172), but ModelsEndpointIDFormat is only defaulted here with no corresponding normalization or validation in Load(). This inconsistency means invalid user-supplied values will silently fall back to the default instead of producing a clear configuration error.
Proposed fix to add validation after line 172:

```diff
 cfg.Models.ConfiguredProviderModelsMode = ResolveConfiguredProviderModelsMode(cfg.Models.ConfiguredProviderModelsMode)
 if !cfg.Models.ConfiguredProviderModelsMode.Valid() {
 	return nil, fmt.Errorf("models.configured_provider_models_mode must be one of: fallback, allowlist")
 }
+cfg.Models.ModelsEndpointIDFormat = ResolveModelsEndpointIDFormat(cfg.Models.ModelsEndpointIDFormat)
+if !cfg.Models.ModelsEndpointIDFormat.Valid() {
+	return nil, fmt.Errorf("models.models_endpoint_id_format must be one of: qualified, unqualified, both")
+}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@config/config.go` at line 65, Add validation and normalization for
ModelsEndpointIDFormat inside the Load() function similar to how
ConfiguredProviderModelsMode is handled: after resolving the field, check that
the value is one of the allowed constants (e.g., ModelsEndpointIDFormatQualified
and any other valid enum values), normalize any synonyms if needed, and return
an error if the value is invalid instead of silently defaulting; update
references to ModelsEndpointIDFormat and ensure the defaulting code (which
currently sets ModelsEndpointIDFormat: ModelsEndpointIDFormatQualified) does not
hide invalid user input by performing this validation/normalization in Load().
Summary by CodeRabbit

- The model ID format returned by GET /v1/models is now configurable via the MODELS_ENDPOINT_ID_FORMAT environment variable or configuration file.