
The abab6.5 series includes two models: abab6.5 and abab6.5s. The larger model, abab6.5, has a trillion parameters and supports a context length of 200k tokens. abab6.5s is trained with the same techniques and data but is more efficient: it also supports a 200k-token context length and can process nearly 30,000 words per second. With significant cost advantages, an industry-leading context length, and high speed, MiniMax's abab6.5 series of LLMs offers a unique value proposition.