MiniMax M2.1: Significantly Enhanced Multi-Language Programming, Built for Real-World Complex Tasks

MiniMax has been continuously transforming itself in a more AI-native direction. The core driving forces behind this process are models, Agent scaffolding, and organization. Throughout this exploration, we have gained an increasingly deep understanding of all three. Today we are releasing an update to the model component, MiniMax M2.1, in the hope of helping more enterprises and individuals find more AI-native ways of working (and living) sooner.
In M2, we primarily addressed model cost and accessibility. In M2.1, we focus on performance in real-world complex tasks, particularly usability across more programming languages and office scenarios, where we aim for best-in-class results.
- Exceptional Multi-Programming Language Capabilities
Many earlier models focused primarily on optimizing for Python, but real-world systems are usually the product of multiple languages working together.
In M2.1, we have systematically enhanced capabilities in Rust, Java, Golang, C++, Kotlin, Objective-C, TypeScript, JavaScript, and other languages. Overall performance on multi-language tasks has reached industry-leading levels, covering the complete chain from low-level systems development to application-layer development.
- WebDev and AppDev: A Comprehensive Leap in Capability and Aesthetics
Addressing the widely recognized weakness in mobile development across the industry, M2.1 significantly strengthens native Android and iOS development capabilities.
Meanwhile, we have systematically enhanced the model's design comprehension and aesthetic expression in Web and App scenarios, enabling it to build complex interactions, 3D scientific scene simulations, and high-quality visualizations, and making vibe coding a sustainable, deliverable production practice.
- Enhanced Composite Instruction Constraints, Enabling Office Scenarios
As one of the first open-source model series to systematically introduce Interleaved Thinking, M2.1 further upgrades its systematic problem-solving capabilities. The model not only focuses on code execution correctness but also emphasizes the integrated execution of composite instruction constraints, providing higher usability in real office scenarios.
- More Concise and Efficient Responses
Compared to M2, MiniMax-M2.1 delivers more concise responses and thought chains. In practical programming and interaction, response speed has improved significantly and token consumption has dropped notably, resulting in smoother and more efficient performance in AI Coding and Agent-driven continuous workflows.
- Outstanding Agent/Tool Scaffolding Generalization Capabilities
M2.1 demonstrates excellent performance across various programming tools and Agent frameworks. It exhibits consistent and stable results in tools such as Claude Code, Droid (Factory AI), Cline, Kilo Code, Roo Code, and BlackBox, while providing reliable support for Context Management mechanisms including Skill.md, Claude.md/agent.md/cursorrule, and Slash Commands.
- High-Quality Dialogue and Writing
M2.1 is no longer just "stronger in coding capabilities." In everyday conversation, technical documentation, and writing scenarios, it also provides more detailed and structured responses.
First Impressions
"We're excited for powerful open-source models like M2.1 that bring frontier performance (and in some cases exceed the frontier) for a wide variety of software development tasks. Developers deserve choice, and M2.1 provides that much needed choice!"
Eno Reyes
Co-Founder, CTO of Factory AI
“MiniMax M2.1 performed exceptionally well across our internal benchmarks, showing strong results in complex instruction following, reranking, and classification, especially within e-commerce tasks. Beyond its general versatility, it has proven to be an excellent model for coding. We are impressed by these results and look forward to a close collaboration with the MiniMax team as we continue to support their latest innovations on the Fireworks platform.”
Benny Chen
Co-founder of Fireworks
“The MiniMax M2 series has demonstrated powerful code generation capability and has quickly become one of the most popular models on the Cline platform over the past few months. We already see another huge advancement in capability in M2.1 and are very excited to continue partnering with the MiniMax team to advance AI in coding.”
Saoud Rizwan
Founder, CEO of Cline
“We could not be more excited about M2.1! Our users have come to rely on MiniMax for frontier-grade coding assistance at a fraction of the cost, and early testing shows M2.1 excelling at everything from architecture and orchestration to code reviews and deployment. The speed and efficiency are off the charts!”
Scott Breitenother
Co-Founder, CEO of Kilo
"Our users love MiniMax M2 for its strong coding ability and efficiency. The latest M2.1 release builds on that foundation with meaningful improvements in speed and reliability, performing well across a wider range of languages and frameworks. It's a great choice for high-throughput, agentic coding workflows where speed and affordability matter."
Matt Rubens
Co-Founder, CEO of RooCode
“Integrating the MiniMax M2 series into our platform has been a significant win for our users, and M2.1 represents a clear step forward in what a coding-specific model can achieve. We’ve found that M2.1 handles the nuances of complex, multi-step programming tasks with a level of consistency that is rare in this space. By providing high-quality reasoning and context awareness at scale, MiniMax has become a core component of how we help developers solve challenging problems faster. We look forward to seeing how our community continues to leverage these updated capabilities.”
Robert Rizk
Co-Founder, CEO of BlackBox AI
Benchmarks

Furthermore, across specific benchmarks—including test case generation, code performance optimization, code review, and instruction following—MiniMax-M2.1 demonstrates comprehensive improvements over M2. In these specialized domains, it consistently matches or exceeds the performance of Claude Sonnet 4.5.

MiniMax-M2.1 delivers outstanding performance on the VIBE aggregate benchmark, achieving an average score of 88.6—demonstrating robust full-stack development capabilities. It excels particularly in the VIBE-Web (91.5) and VIBE-Android (89.7) subsets.


Showcases
Multilingual Coding
Agentic Tool Use
Digital Employee
End-to-End Office Automation
Local Deployment Guide
We recommend using the following inference frameworks (listed alphabetically) to serve the model:
SGLang
We recommend using SGLang to serve MiniMax-M2.1. Please refer to our SGLang Deployment Guide.
Transformers
We recommend using Transformers to serve MiniMax-M2.1. Please refer to our Transformers Deployment Guide.
Inference Parameters
We recommend using the following parameters for best performance: `temperature=1.0`, `top_p=0.95`, `top_k=40`.
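To make these recommendations concrete, here is a minimal sketch of what temperature, top-k, and top-p (nucleus) filtering do to a token distribution. This uses the standard textbook definitions of these sampling parameters and is purely illustrative; inference engines such as SGLang implement the equivalent logic on the GPU.

```python
import math

def sample_filter(logits, temperature=1.0, top_k=40, top_p=0.95):
    """Apply temperature scaling, then top-k and top-p (nucleus) filtering.

    Returns renormalized probabilities over the surviving tokens as a
    dict {token_index: probability}. Illustrative only.
    """
    # Temperature scaling: higher temperature flattens the distribution.
    scaled = [l / temperature for l in logits]

    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Top-k: keep at most the k most probable tokens.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    ranked = ranked[:top_k]

    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize over the kept tokens.
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}

# With the recommended settings, low-probability tail tokens are pruned.
dist = sample_filter([2.0, 1.0, 0.5, -1.0], temperature=1.0, top_k=40, top_p=0.95)
```

With `top_p=0.95`, the lowest-probability token in this toy example falls outside the nucleus and is never sampled; lowering `top_p` further makes generation greedier.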
How to Use
- The MiniMax-M2.1 API is now live on the MiniMax Open Platform: https://platform.minimax.io/docs/guides/text-generation
- Our product MiniMax Agent, built on MiniMax-M2.1, is now publicly available: https://agent.minimax.io/
- The MiniMax-M2.1 model weights are now open-source, allowing for local deployment and use: https://huggingface.co/MiniMaxAI/MiniMax-M2.1
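As a hedged sketch of calling the hosted API, the snippet below builds an OpenAI-style chat-completions request with the recommended sampling parameters. The endpoint URL, payload field names, and the `"MiniMax-M2.1"` model string are assumptions here; consult the platform documentation linked above for the authoritative request format.

```python
import json
import urllib.request

# Assumed endpoint; verify against the MiniMax Open Platform docs.
API_URL = "https://api.minimax.io/v1/chat/completions"

def build_request(prompt, api_key):
    """Build an HTTP request for a single chat turn.

    The payload shape (an OpenAI-style messages list) is an assumption,
    not taken from the official docs.
    """
    payload = {
        "model": "MiniMax-M2.1",
        "messages": [{"role": "user", "content": prompt}],
        # Recommended sampling parameters from the deployment guide.
        "temperature": 1.0,
        "top_p": 0.95,
        "top_k": 40,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_request("Write a Rust hello world.", api_key="YOUR_KEY")
# urllib.request.urlopen(req)  # uncomment with a real API key
```

Any OpenAI-compatible client library can be substituted for the raw `urllib` call; only the endpoint and key change.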
Contact Us
Business Cooperation: [email protected]
MiniMax X: https://x.com/MiniMax__AI
MiniMax LinkedIn: https://www.linkedin.com/company/81521159
MiniMax Discord: https://discord.gg/minimax






