Missing models
Two new provider families — MiniMax and Z.AI — launched on Amazon Bedrock in March 2026 but have zero representation in the model catalog (packages/proxy/schema/model_list.json). No minimax.* or zai.* entries exist.
Both models were announced together in the same AWS What's New post (March 18, 2026).
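The gap is easy to confirm mechanically. A minimal sketch (assuming the catalog is a JSON object keyed by model ID, as in packages/proxy/schema/model_list.json; the sample dict here is a stand-in for illustration):

```python
import json

def missing_families(catalog: dict, prefixes=("minimax.", "zai.")) -> list:
    """Return the ID prefixes with no matching model in the catalog."""
    return [p for p in prefixes if not any(k.startswith(p) for k in catalog)]

# In the repo this would be:
#   catalog = json.load(open("packages/proxy/schema/model_list.json"))
# A tiny stand-in catalog illustrates the check:
sample = {"amazon.nova-pro-v1:0": {}, "anthropic.claude-3-5-sonnet-20240620-v1:0": {}}
print(missing_families(sample))  # -> ['minimax.', 'zai.']
```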
Model details
| Field | minimax.minimax-m2.5 | zai.glm-5 |
| --- | --- | --- |
| Format | converse | converse |
| Flavor | chat | chat |
| Multimodal | false | false |
| Input $/MTok | $0.30 | $1.00 |
| Output $/MTok | $1.20 | $3.20 |
| Max input tokens | 196000 | 200000 |
| Max output tokens | 98000 | 128000 |
| Available providers | ["bedrock"] | ["bedrock"] |
| Display name | MiniMax M2.5 | GLM 5 |
Verification checklist

- Confirm no minimax.* or zai.* entries exist in model_list.json.
- Model IDs follow Bedrock's naming convention (provider.model-name), consistent with existing entries like amazon.nova-pro-v1:0.
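The naming-convention check can be scripted. A sketch; the regex is an assumption inferred from existing Bedrock IDs like amazon.nova-pro-v1:0, not a documented grammar:

```python
import re

# Bedrock-style model IDs: provider.model-name, with an optional :version suffix.
# Pattern is an assumption based on IDs like "amazon.nova-pro-v1:0".
ID_RE = re.compile(r"^[a-z0-9-]+\.[a-z0-9.-]+(:[0-9]+)?$")

for model_id in ("minimax.minimax-m2.5", "zai.glm-5", "amazon.nova-pro-v1:0"):
    print(model_id, bool(ID_RE.match(model_id)))  # all three should match
```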
Verification notes
| Field | Source |
| --- | --- |
| Model IDs | Bedrock model cards — Programmatic Access tables |
| Context windows, max output | Bedrock model cards — Model Details sections |
| GA availability (March 2026) | AWS What's New |
| Pricing (MiniMax: $0.30/$1.20, GLM: $1.00/$3.20) | Bedrock pricing page and third-party confirmation (pricepertoken.com) |
Fields NOT verified from official sources:

- reasoning — Neither model card explicitly states reasoning token support. MiniMax M2.5 mentions "token-efficient reasoning" in its description, but this may refer to general capabilities, not a dedicated reasoning mode.
- Both models also have bedrock-mantle Chat Completions API endpoints. The downstream fix job should consider whether separate entries are needed.
```json
{
"kind": "missing_model",
"provider": "bedrock",
"models": ["minimax.minimax-m2.5", "zai.glm-5"],
"status": "active",
"model_specs": {
"minimax.minimax-m2.5": {
"format": "converse",
"flavor": "chat",
"multimodal": false,
"input_cost_per_mil_tokens": 0.3,
"output_cost_per_mil_tokens": 1.2,
"displayName": "MiniMax M2.5",
"max_input_tokens": 196000,
"max_output_tokens": 98000,
"available_providers": ["bedrock"]
},
"zai.glm-5": {
"format": "converse",
"flavor": "chat",
"multimodal": false,
"input_cost_per_mil_tokens": 1.0,
"output_cost_per_mil_tokens": 3.2,
"displayName": "GLM 5",
"max_input_tokens": 200000,
"max_output_tokens": 128000,
"available_providers": ["bedrock"]
}
},
"source_urls": [
"https://docs.aws.amazon.com/bedrock/latest/userguide/model-card-minimax-minimax-m2-5.html",
"https://docs.aws.amazon.com/bedrock/latest/userguide/model-card-zai-glm-5.html",
"https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-bedrock-minimax-glm/",
"https://aws.amazon.com/bedrock/pricing/"
]
}
```
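Applying the payload above is a straightforward merge of model_specs into the catalog. A minimal sketch, assuming the catalog maps model IDs to spec objects (the trimmed report dict here is illustrative; the real payload carries the full specs):

```python
def apply_missing_models(catalog: dict, report: dict) -> dict:
    """Merge the report's model_specs into the catalog, refusing to overwrite."""
    for model_id, spec in report["model_specs"].items():
        if model_id in catalog:
            raise ValueError(f"{model_id} already present; not overwriting")
        catalog[model_id] = spec
    return catalog

# Trimmed-down report for illustration; the real payload has the full fields.
report = {
    "kind": "missing_model",
    "model_specs": {
        "minimax.minimax-m2.5": {"format": "converse", "flavor": "chat"},
        "zai.glm-5": {"format": "converse", "flavor": "chat"},
    },
}
catalog = apply_missing_models({}, report)
print(sorted(catalog))  # -> ['minimax.minimax-m2.5', 'zai.glm-5']
```

Refusing to overwrite an existing entry keeps the fix idempotent-safe: re-running the job against an already-patched catalog fails loudly instead of silently clobbering hand-edited specs.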