News

We break down China’s new open-source reasoning model, MiniMax-M1: real benchmarks, hidden tradeoffs, and how it stacks up ...
Every few months, an AI lab in China that most people in the U.S. have never heard of releases an AI model that upends conventional wisdom about the cost of training and running cutting-edge AI. In ...
MiniMax-M1 presents a flexible option for organizations looking to experiment with or scale up advanced AI capabilities while managing costs.
MiniMax-M1 was released Monday under an Apache software license, making it genuinely open source, unlike Meta's Llama family, which is offered under a community license that is not open source, and DeepSeek, ...
The Shanghai-based company said its new MiniMax-M1 model sharply cuts computational cost, requiring just 30% of the computing power needed by rival DeepSeek's R1 model ...
We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism ...
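To make the "hybrid attention" idea concrete, here is a minimal, illustrative sketch, not MiniMax's implementation: a transformer-style stack that interleaves linear-time attention blocks with periodic full softmax-attention blocks. The layer sizes, the interleave period, and the particular linear-attention formulation are assumptions chosen for illustration, and the MoE routing layers are omitted for brevity.

```python
# Illustrative sketch of a hybrid-attention stack (not MiniMax's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Linear-time attention: phi(Q) @ (phi(K)^T V), with phi = elu + 1."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k = F.elu(q) + 1, F.elu(k) + 1
        kv = torch.einsum("bnd,bne->bde", k, v)                    # O(n) summary of keys/values
        z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(1)) + 1e-6)  # normalization term
        return self.out(torch.einsum("bnd,bde,bn->bne", q, kv, z))


class SoftmaxAttention(nn.Module):
    """Standard full self-attention block (quadratic in sequence length)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return out


class HybridStack(nn.Module):
    """Mostly linear attention, with a full softmax block every `period` layers."""
    def __init__(self, dim, depth=8, period=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [SoftmaxAttention(dim) if (i + 1) % period == 0 else LinearAttention(dim)
             for i in range(depth)]
        )

    def forward(self, x):
        for layer in self.layers:
            x = x + layer(x)  # residual connection around each attention block
        return x


x = torch.randn(2, 128, 64)           # (batch, sequence, hidden)
print(HybridStack(dim=64)(x).shape)   # torch.Size([2, 128, 64])
```

The design intent this illustrates is that linear attention keeps per-token cost roughly constant as context grows, while the occasional full-attention layers preserve global mixing, which is how a hybrid stack can cut the compute needed for very long reasoning traces.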
The MiniMax-M1 model supports function calling, enabling it to identify when external functions need to be called and to output the call parameters in a structured format.
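Function calling of this kind is typically exposed through an OpenAI-compatible chat-completions interface. The sketch below shows the flow under that assumption; the base URL, model identifier, and the `get_weather` tool are placeholders for illustration, not values taken from MiniMax's documentation.

```python
# Minimal function-calling sketch against an OpenAI-compatible endpoint.
# Endpoint, model name, and tool definition are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

# Describe an external function the model may decide to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="MiniMax-M1",  # placeholder model identifier
    messages=[{"role": "user", "content": "What's the weather in Shanghai?"}],
    tools=tools,
)

# If the model decides a tool call is needed, it returns the function name
# and structured arguments instead of a plain text answer.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```

In practice the caller would execute the requested function, append its result to the conversation as a tool message, and ask the model for a final answer.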
Chinese AI company MiniMax has released a new AI model called M1 that it says equals the performance of top models from labs such as OpenAI, Anthropic, and Google DeepMind, but was trained at a ...