Suggestion: rather than just allowing different models and providers, enable a config where for each model you can set separately:

- `model_name` - e.g. `gpt-5.4-2026-03-05`
- `api_type` - one of `openai-completions`, `anthropic-messages`, `google-generative-ai`
- `base_URL` - defaults to whatever URL corresponds to the `api_type`; e.g. for `openai-completions` the default would be https://api.openai.com/v1/chat/completions, but alternatives then become possible, e.g. https://openrouter.ai/api/v1, https://api.wisgate.ai/v1beta, etc.
- `api_key` - plaintext for the first iteration
- `max_context_window` - so it doesn't try to send the model more than it can take
- `max_tokens` - to control output length
- `reasoning` - `true` or `false`, if the model supports it

Allow several such entries to exist in a JSON config file, and (for now) pick one as the active one.
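A minimal sketch of what such a config file could look like. All keys, model names, and values here are illustrative assumptions, not a finalized schema:

```json
{
  "active_model": "openrouter-gpt",
  "models": {
    "openrouter-gpt": {
      "model_name": "gpt-5.4-2026-03-05",
      "api_type": "openai-completions",
      "base_URL": "https://openrouter.ai/api/v1",
      "api_key": "sk-or-...",
      "max_context_window": 200000,
      "max_tokens": 8192,
      "reasoning": true
    },
    "anthropic-default": {
      "model_name": "example-anthropic-model",
      "api_type": "anthropic-messages",
      "api_key": "sk-ant-...",
      "max_context_window": 200000,
      "max_tokens": 4096,
      "reasoning": false
    }
  }
}
```

In this sketch, `base_URL` is omitted from the second entry to illustrate falling back to the default URL for its `api_type`, and `active_model` selects which entry is in use.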