New 🚀 230B MoE Model with 204K Context - Open Source!

MiniMax-M2: Advanced AI for Coding and Agent Workflows

MiniMax-M2 is a powerful 230B-parameter MoE (Mixture of Experts) AI model designed specifically for coding and intelligent agent workflows. With its massive 204K-token context window and exceptional programming capabilities, it delivers enterprise-grade performance while maintaining cost efficiency. Released under the Apache 2.0 license, it is completely open-source and ready for commercial use.

What is MiniMax-M2

MiniMax-M2 is a breakthrough 230-billion-parameter AI model built on a Mixture of Experts (MoE) architecture, activating only 10 billion parameters per token for maximum efficiency. This design delivers exceptional performance in coding, agent workflows, and general intelligence tasks while using significantly fewer computational resources than traditional dense models. With a 204K-token context window and 131K-token maximum output, it handles complex multi-file projects and long-form code generation with ease. Released under the Apache 2.0 license, MiniMax-M2 is completely open-source and commercially friendly, making advanced AI accessible to developers and businesses worldwide.

  • Advanced Programming Intelligence
    Built specifically for developers, MiniMax-M2 excels at code generation, multi-file editing, debugging workflows, and test-driven development with industry-leading accuracy.
  • Massive Context Understanding
    With a 204K-token context window, it can process entire codebases, long documents, and complex project structures while maintaining coherent understanding throughout.
  • Cost-Effective Architecture
    MoE design activates only 10B of its 230B parameters per token, delivering superior performance at roughly 8% of the cost of comparable models.
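
For a concrete starting point, the sketch below queries MiniMax-M2 through an OpenAI-compatible chat endpoint using the openai Python client. The base URL, API key, and model identifier are placeholders; substitute the values for your own deployment (for example, a self-hosted vLLM or SGLang server running the open weights) or your hosted provider.

  # Minimal sketch: querying MiniMax-M2 through an OpenAI-compatible endpoint.
  # base_url, api_key, and the model name are placeholders for your own setup.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:8000/v1",  # placeholder: your server or provider
      api_key="EMPTY",                      # local servers often ignore the key
  )

  response = client.chat.completions.create(
      model="MiniMaxAI/MiniMax-M2",         # placeholder model identifier
      messages=[
          {"role": "system", "content": "You are a senior software engineer."},
          {"role": "user", "content": "Write a Python function that merges two sorted lists."},
      ],
      max_tokens=1024,
  )
  print(response.choices[0].message.content)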

Key Features of MiniMax-M2

Discover the powerful capabilities that make MiniMax-M2 the ideal choice for modern development workflows.

Mixture of Experts Architecture

Advanced MoE design with 230B total parameters, only 10B of which are active per token, delivering maximum performance with minimal computational overhead for cost-effective AI solutions.
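
To make the 230B-total / 10B-active idea concrete, here is a toy top-k routing layer in PyTorch. It illustrates the general MoE pattern only; the expert count, k, and dimensions are invented and are not MiniMax-M2's actual configuration.

  # Toy illustration of Mixture-of-Experts routing: a router picks the top-k
  # experts per token, so only a small slice of the total parameters runs.
  # Expert count, k, and sizes are made up and NOT MiniMax-M2's real config.
  import torch
  import torch.nn as nn

  class ToyMoELayer(nn.Module):
      def __init__(self, d_model=64, n_experts=8, k=2):
          super().__init__()
          self.k = k
          self.router = nn.Linear(d_model, n_experts)
          self.experts = nn.ModuleList(
              nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                            nn.Linear(4 * d_model, d_model))
              for _ in range(n_experts)
          )

      def forward(self, x):                      # x: (tokens, d_model)
          scores = self.router(x)                # (tokens, n_experts)
          weights, idx = scores.topk(self.k, dim=-1)
          weights = weights.softmax(dim=-1)
          out = torch.zeros_like(x)
          for slot in range(self.k):             # only k experts run per token
              for e in range(len(self.experts)):
                  mask = idx[:, slot] == e
                  if mask.any():
                      out[mask] += weights[mask, slot:slot+1] * self.experts[e](x[mask])
          return out

  tokens = torch.randn(5, 64)
  print(ToyMoELayer()(tokens).shape)             # torch.Size([5, 64])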

Ultra-Large Context Window

Industry-leading 204K-token context window allows processing of entire codebases, complex documentation, and multi-file projects without losing important context.
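
One practical implication: you can fit an entire repository into a single prompt if you count tokens first. The sketch below does that with a Hugging Face tokenizer; the MiniMaxAI/MiniMax-M2 repo id and the local project path are assumptions, so substitute the tokenizer and paths you actually use.

  # Sketch: checking that a whole-repository prompt fits in the ~204K-token
  # window before sending it. Repo id and project path are placeholders.
  from pathlib import Path
  from transformers import AutoTokenizer

  CONTEXT_LIMIT = 204_000  # approximate window size quoted above

  tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/MiniMax-M2")

  def build_repo_prompt(repo_root, suffixes=(".py", ".md")):
      # Concatenate source files into one prompt, tagged with their paths.
      parts = []
      for path in sorted(Path(repo_root).rglob("*")):
          if path.is_file() and path.suffix in suffixes:
              parts.append(f"### FILE: {path}\n{path.read_text(errors='ignore')}")
      return "\n\n".join(parts)

  prompt = build_repo_prompt("./my_project")     # hypothetical local repo
  n_tokens = len(tokenizer.encode(prompt))
  print(f"{n_tokens} tokens ({n_tokens / CONTEXT_LIMIT:.0%} of the window)")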

Superior Coding Capabilities

Optimized for programming tasks including code generation, multi-file editing, compile-run-fix loops, debugging, and test validation with exceptional accuracy.
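
A compile-run-fix loop can be as simple as the sketch below: ask the model for a script, execute it, and feed any traceback back until it runs. It reuses the client from the quickstart sketch above, and the model identifier remains a placeholder.

  # Sketch of a compile-run-fix loop: generate a script, run it, and feed any
  # traceback back to the model. Reuses the OpenAI-compatible `client` defined
  # in the quickstart sketch; the model name is still a placeholder.
  import subprocess
  import sys
  import tempfile

  def extract_code(text):
      # Drop markdown code fences if the model wrapped its answer in them.
      return "\n".join(l for l in text.splitlines() if not l.strip().startswith("```"))

  def run_fix_loop(client, task, model="MiniMaxAI/MiniMax-M2", max_rounds=3):
      messages = [{"role": "user",
                   "content": f"Write a standalone Python script: {task}. Reply with code only."}]
      code, output = "", ""
      for _ in range(max_rounds):
          reply = client.chat.completions.create(model=model, messages=messages)
          code = extract_code(reply.choices[0].message.content)
          with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
              f.write(code)
          result = subprocess.run([sys.executable, f.name],
                                  capture_output=True, text=True, timeout=60)
          output = result.stdout if result.returncode == 0 else result.stderr
          if result.returncode == 0:
              return code, output                # the script ran cleanly
          messages += [{"role": "assistant", "content": code},
                       {"role": "user", "content": f"That failed with:\n{output}\nPlease fix it."}]
      return code, output                        # best attempt after max_rounds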

Intelligent Agent Workflows

Designed for complex agentic tasks with tool integration, seamless workflow automation, and the ability to handle multi-step problem-solving processes.
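
Tool integration typically goes through the OpenAI-style tools parameter; the sketch below assumes your MiniMax-M2 endpoint supports that convention and reuses the client from the quickstart. The search_tickets tool is a made-up example.

  # Sketch of OpenAI-style tool calling, assuming the serving endpoint supports
  # the `tools` convention. `search_tickets` is a hypothetical example tool.
  import json

  tools = [{
      "type": "function",
      "function": {
          "name": "search_tickets",
          "description": "Search the issue tracker for open tickets matching a query.",
          "parameters": {
              "type": "object",
              "properties": {"query": {"type": "string"}},
              "required": ["query"],
          },
      },
  }]

  response = client.chat.completions.create(
      model="MiniMaxAI/MiniMax-M2",   # placeholder model identifier
      messages=[{"role": "user", "content": "Find open tickets about login failures."}],
      tools=tools,
  )

  call = response.choices[0].message.tool_calls[0]
  print(call.function.name, json.loads(call.function.arguments))

Your agent then executes the requested tool locally and returns the result in a follow-up tool message so the model can continue the multi-step workflow.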

Open Source Freedom

Released under Apache 2.0 license, providing complete freedom for commercial use, modification, and distribution without licensing restrictions or fees.

Exceptional Performance Efficiency

Ranks #1 among open-source models worldwide while using only about 8% of the computational cost of similar-sized traditional models.

What People Are Saying About MiniMax-M2 on X

Join the conversation about MiniMax-M2 and share your experience with the developer community

FAQ