
feat: add MiniMax as a supported LLM provider with M2.7 as default#815

Open
octo-patch wants to merge 2 commits into Cinnamon:main from octo-patch:feat/add-minimax-provider

Conversation

@octo-patch octo-patch commented Mar 15, 2026

Summary

  • Add MiniMax as a supported LLM provider via OpenAI-compatible API
  • Include MiniMax-M2.7 (default), MiniMax-M2.7-highspeed, MiniMax-M2.5, and MiniMax-M2.5-highspeed models
  • Clamp temperature and strip `response_format` for MiniMax API compatibility
  • Register ChatMiniMax in vendor list for UI model selection
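The parameter handling described above (temperature clamping plus `response_format` removal) might look roughly like this standalone sketch. The function name `prepare_minimax_params` and the clamp bounds are illustrative assumptions, not code from the PR:

```python
def prepare_minimax_params(params: dict) -> dict:
    """Adjust OpenAI-style request params for the MiniMax API.

    Sketch of the behaviour described in this PR:
    - temperature is clamped into a range the MiniMax API accepts
      (the exact bounds here are assumed, check the MiniMax docs);
    - OpenAI's ``response_format`` field is dropped, since MiniMax
      does not accept it.
    """
    adjusted = dict(params)  # avoid mutating the caller's dict
    if "temperature" in adjusted:
        adjusted["temperature"] = min(max(adjusted["temperature"], 0.01), 1.0)
    adjusted.pop("response_format", None)
    return adjusted
```

In the actual change this logic lives inside `ChatMiniMax` (which extends `ChatOpenAI`), so callers never see the raw OpenAI-style kwargs.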

Changes

  • libs/kotaemon/kotaemon/llms/chats/minimax.py — ChatMiniMax class extending ChatOpenAI
  • libs/ktem/ktem/llms/manager.py — Register ChatMiniMax as vendor
  • flowsettings.py — Default MiniMax config with M2.7
  • .env.example — MINIMAX_API_KEY placeholder
  • README.md — MiniMax documentation with available models
  • libs/kotaemon/tests/test_llms_chat_models.py — Unit tests covering MiniMax model instantiation, default-model verification, temperature clamping, and backward compatibility
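The flowsettings.py entry might be shaped something like the sketch below. The key names, the `KH_LLMS_MINIMAX` variable, and the endpoint URL are all assumptions for illustration; only the model names come from this PR:

```python
# Hypothetical shape of the MiniMax entry in flowsettings.py.
# The real config keys in Kotaemon, and the real API base URL,
# may differ -- treat everything except the model name as a placeholder.
KH_LLMS_MINIMAX = {
    "spec": {
        "__type__": "kotaemon.llms.ChatMiniMax",
        "base_url": "https://api.minimax.example/v1",  # placeholder URL
        "model": "MiniMax-M2.7",  # default model per this PR
        "api_key": "<MINIMAX_API_KEY from .env>",
    },
    "default": False,
}
```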

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities.

Testing

  • Unit tests for model instantiation, temperature clamping, response_format removal
  • Default model verification (M2.7)
  • Backward compatibility test for M2.5 models
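The checks listed above could be unit-tested along these lines. This is a self-contained sketch: `clamp_temperature` is a stub standing in for the real logic inside `ChatMiniMax`, and the test names are illustrative, not taken from the PR's test file:

```python
# Illustrative test shape; the real tests live in
# libs/kotaemon/tests/test_llms_chat_models.py and exercise ChatMiniMax itself.

MINIMAX_MODELS = [
    "MiniMax-M2.7",
    "MiniMax-M2.7-highspeed",
    "MiniMax-M2.5",
    "MiniMax-M2.5-highspeed",
]
DEFAULT_MODEL = "MiniMax-M2.7"


def clamp_temperature(value: float, low: float = 0.01, high: float = 1.0) -> float:
    """Stub for the clamping done inside ChatMiniMax (bounds assumed)."""
    return min(max(value, low), high)


def test_default_model_is_m27():
    assert DEFAULT_MODEL == "MiniMax-M2.7"


def test_m25_models_still_listed():
    # backward compatibility: M2.5 variants remain selectable
    assert "MiniMax-M2.5" in MINIMAX_MODELS
    assert "MiniMax-M2.5-highspeed" in MINIMAX_MODELS


def test_temperature_clamping():
    assert clamp_temperature(2.0) == 1.0
    assert clamp_temperature(-0.5) == 0.01
    assert clamp_temperature(0.7) == 0.7
```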

PR Bot and others added 2 commits March 15, 2026 08:58
Add native support for MiniMax large language models (MiniMax-M2.5,
MiniMax-M2.5-highspeed) via their OpenAI-compatible API endpoint.

Changes:
- Add ChatMiniMax class extending ChatOpenAI with MiniMax-specific
  defaults and parameter handling (temperature clamping, response_format
  removal)
- Register ChatMiniMax in LLMManager vendors for UI model selection
- Add default MiniMax configuration in flowsettings.py
- Add MINIMAX_API_KEY to .env.example
- Add unit test for ChatMiniMax
- Update README to mention MiniMax in supported providers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Set MiniMax-M2.7 as default model
- Keep all previous models as alternatives
- Update related tests
@octo-patch octo-patch changed the title feat: add MiniMax as a supported LLM provider feat: add MiniMax as a supported LLM provider with M2.7 as default Mar 18, 2026