feat: add MiniMax as first-class LLM provider #136

Open
octo-patch wants to merge 1 commit into PaperDebugger:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax M2.7 and M2.7-highspeed as first-class LLM provider options in PaperDebugger. MiniMax offers a 1M-token context window and 128K output tokens at competitive pricing through an OpenAI-compatible API.

Changes

  • Model registry: Add MiniMax M2.7 ($1.00/$4.00 per 1M tokens) and M2.7-highspeed ($0.50/$2.00) to allModels with dual-slug support (OpenRouter + direct API)
  • Provider routing: Route MiniMax models to the MiniMax API when MINIMAX_API_KEY is configured; fall back to the OpenRouter/inference endpoint otherwise
  • Configuration: Add MINIMAX_API_KEY and MINIMAX_BASE_URL environment variables (default: https://api.minimax.io/v1)
  • Temperature clamping: Enforce MiniMax temperature range [0.0, 1.0]
  • Documentation: Update README.md and DEVELOPMENT.md with MiniMax provider info
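
The temperature-clamping change above can be sketched as follows. This is a minimal illustration, not the actual code in `utils_v2.go`; the function name and the way the bounds are expressed are assumptions, only the [0.0, 1.0] range comes from the PR description.

```go
package main

import "fmt"

// clampMiniMaxTemperature pins a requested sampling temperature to the
// MiniMax-supported range [0.0, 1.0] described in the PR. The function
// name is a hypothetical stand-in for the helper in utils_v2.go.
func clampMiniMaxTemperature(t float64) float64 {
	if t < 0.0 {
		return 0.0
	}
	if t > 1.0 {
		return 1.0
	}
	return t
}

func main() {
	fmt.Println(clampMiniMaxTemperature(1.7)) // out of range: clamped to 1
	fmt.Println(clampMiniMaxTemperature(0.3)) // in range: unchanged
}
```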

How it works

  1. Via OpenRouter (default): MiniMax models work out of the box through the existing inference/OpenRouter endpoint; no additional configuration is needed
  2. Via direct MiniMax API (optional): Set MINIMAX_API_KEY env var to route MiniMax models directly to api.minimax.io/v1 for potentially lower latency and cost

Files changed (14 files, 661 additions)

| File | Change |
| --- | --- |
| internal/api/chat/list_supported_models_v2.go | Add MiniMax models + slug selection logic |
| internal/services/toolkit/client/client_v2.go | MiniMax endpoint routing in GetOpenAIClient() |
| internal/services/toolkit/client/completion_v2.go | Set model name for provider-aware routing |
| internal/services/toolkit/client/utils_v2.go | Temperature clamping for MiniMax |
| internal/libs/cfg/cfg.go | Add MiniMax config fields |
| internal/models/llm_provider.go | Add IsMiniMaxModel() helper |
| README.md | Mention MiniMax in architecture overview |
| docs/DEVELOPMENT.md | Document MiniMax env vars |
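
The `IsMiniMaxModel()` helper listed for `internal/models/llm_provider.go` might look like the sketch below. The matching rule is an assumption: the PR only says models have dual slugs (OpenRouter + direct API), so this sketch matches either form by a case-insensitive check on the vendor part of the slug.

```go
package main

import (
	"fmt"
	"strings"
)

// IsMiniMaxModel reports whether a model slug belongs to MiniMax.
// Hypothetical sketch: it accepts both an OpenRouter-style slug
// ("minimax/...") and a direct-API slug beginning with "MiniMax".
func IsMiniMaxModel(slug string) bool {
	s := strings.ToLower(slug)
	return strings.HasPrefix(s, "minimax/") || strings.HasPrefix(s, "minimax-")
}

func main() {
	fmt.Println(IsMiniMaxModel("minimax/minimax-m2.7")) // OpenRouter slug
	fmt.Println(IsMiniMaxModel("gpt-4o"))               // non-MiniMax model
}
```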

Test plan

  • 15 unit tests: model detection, config, registry validation, slug uniqueness, pricing
  • 12 unit tests: temperature clamping, tools support, routing logic, params
  • 4 integration tests: provider routing with/without server key, user key scenarios
  • go build ./... passes
  • All existing tests still pass

Add MiniMax M2.7 and M2.7-highspeed models to PaperDebugger's model
registry with full OpenAI-compatible API support. Models are available
via OpenRouter (default) and optionally via direct MiniMax API when
MINIMAX_API_KEY is configured.

Changes:
- Add MiniMax M2.7/M2.7-highspeed to allModels registry (1M context, 128K output)
- Add MINIMAX_API_KEY/MINIMAX_BASE_URL env var config
- Route MiniMax models to MiniMax API when server key is configured
- Temperature clamping for MiniMax models (max 1.0)
- Set model name on LLMProviderConfig for provider-aware routing
- Update README and DEVELOPMENT.md with MiniMax documentation
- 27 unit tests + 4 integration tests across 5 test files
@Junyi-99 Junyi-99 requested a review from kah-seng March 24, 2026 16:16