
feat: add MiniMax as OpenAI-compatible LLM provider#350

Open
octo-patch wants to merge 1 commit into vercel:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax as a first-class OpenAI-compatible LLM provider
  • MiniMax offers high-performance models (M2.5, M2.7) with up to 1M token context windows via an OpenAI-compatible API at https://api.minimax.io/v1
  • Follows the existing provider pattern (FireworksAI, Perplexity, TogetherAI)

Changes

  • MiniMaxApiConfiguration.ts: New API configuration class extending BaseUrlApiConfigurationWithDefaults, uses MINIMAX_API_KEY env var
  • OpenAICompatibleFacade.ts: Added MiniMaxApi() factory function alongside existing providers
  • index.ts: Export MiniMaxApiConfiguration
  • MiniMaxApiConfiguration.test.ts: 8 unit tests covering API config, streaming, and text generation (using MSW mocks)
  • MiniMaxApiConfiguration.integration.test.ts: 3 integration tests (skipped without MINIMAX_API_KEY)
  • README.md: Added MiniMax to provider lists and usage example
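The configuration class described above can be sketched roughly as follows. This is a minimal, self-contained illustration of the pattern, not the PR's actual code: the stand-in base class below is hypothetical (modelfusion's real `BaseUrlApiConfigurationWithDefaults` handles retry/throttle settings and more), and the option names are assumed from the description.

```typescript
// Hypothetical stand-in for modelfusion's BaseUrlApiConfigurationWithDefaults.
// The real base class lives in the library; this only models the two fields
// relevant here: the base URL and the auth headers.
class BaseUrlApiConfigurationStandIn {
  constructor(
    readonly baseUrl: string,
    readonly headers: Record<string, string>,
  ) {}
}

// Sketch of the new provider configuration: defaults to MiniMax's
// OpenAI-compatible endpoint and falls back to the MINIMAX_API_KEY
// environment variable when no key is passed explicitly.
class MiniMaxApiConfigurationSketch extends BaseUrlApiConfigurationStandIn {
  constructor(options: { apiKey?: string; baseUrl?: string } = {}) {
    // Read process.env via globalThis so the sketch does not require
    // Node type definitions to compile.
    const env = (globalThis as any).process?.env ?? {};
    const apiKey = options.apiKey ?? env.MINIMAX_API_KEY;
    if (apiKey == null) {
      throw new Error(
        "MiniMax API key missing. Pass options.apiKey or set MINIMAX_API_KEY.",
      );
    }
    super(options.baseUrl ?? "https://api.minimax.io/v1", {
      Authorization: `Bearer ${apiKey}`,
    });
  }
}
```

Following the existing providers, the `MiniMaxApi()` facade function would then simply construct this class with the given settings.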

Usage

import { generateText, openaicompatible } from "modelfusion";

const text = await generateText({
  model: openaicompatible
    .ChatTextGenerator({
      api: openaicompatible.MiniMaxApi(),
      model: "MiniMax-M2.5",
    })
    .withTextPrompt(),
  prompt: "Write a short story about a robot learning to love:",
});

Available models: MiniMax-M2.5, MiniMax-M2.5-highspeed (204K context), MiniMax-M2.7 (1M context)

Test plan

  • All 8 unit tests pass (API config validation, streaming text, non-streaming text generation)
  • All 161 existing tests continue to pass (no regressions)
  • Integration tests pass with valid MINIMAX_API_KEY (auto-skipped in CI)
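The auto-skip behavior in the last bullet usually comes down to a simple environment guard. A minimal sketch of that guard (the helper name is hypothetical; the PR's actual tests may instead use the test runner's built-in conditional skip, e.g. Vitest's `describe.skipIf`):

```typescript
// Read process.env via globalThis so the sketch compiles without Node typings.
const env = (globalThis as any).process?.env ?? {};

// Hypothetical guard mirroring the described behavior: integration tests run
// only when MINIMAX_API_KEY is set, so CI (which has no such secret) skips them.
function shouldRunIntegrationTests(): boolean {
  const key = env.MINIMAX_API_KEY;
  return typeof key === "string" && key.length > 0;
}
```

In a Vitest suite this would typically appear as `describe.skipIf(!shouldRunIntegrationTests())(...)`, keeping the unit tests (which use MSW mocks) unconditional.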

Add MiniMax API configuration for using MiniMax models (M2.5, M2.7)
via the OpenAI-compatible interface. MiniMax provides high-performance
LLMs with up to 1M token context windows.

Changes:
- Add MiniMaxApiConfiguration class extending BaseUrlApiConfigurationWithDefaults
- Add MiniMaxApi() factory function to OpenAICompatibleFacade
- Export MiniMaxApiConfiguration from openai-compatible index
- Add 8 unit tests (API config, streaming, text generation)
- Add 3 integration tests (skipped without MINIMAX_API_KEY)
- Update README with MiniMax provider links and usage example