DeepSeek's flagship mixture-of-experts (MoE) model. By routing each token through only a small subset of its experts, it delivers performance comparable to GPT-4o at a fraction of the compute cost of an equally capable dense model.
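
The efficiency comes from sparse activation: with top-k routing, each token pays for only k of E expert networks. The sketch below is a generic, minimal illustration of that idea with toy sizes and linear maps standing in for real FFN experts; it is not DeepSeek's actual architecture, which adds shared experts and its own load-balancing scheme.

```python
# Minimal top-k MoE routing sketch (generic illustration, not DeepSeek's design).
# Compute per token scales with k (experts used), not E (experts total).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 16, 8, 2    # toy sizes, chosen for illustration
tokens = rng.normal(size=(4, d_model))  # a batch of 4 token embeddings

# Router: a linear layer scoring each token against every expert.
w_router = rng.normal(size=(d_model, n_experts))
# "Experts": per-expert linear maps standing in for full FFN blocks.
w_experts = rng.normal(size=(n_experts, d_model, d_model))

scores = softmax(tokens @ w_router)               # (4, n_experts) gate scores
chosen = np.argsort(-scores, axis=-1)[:, :top_k]  # top-k expert ids per token

out = np.zeros_like(tokens)
for t in range(tokens.shape[0]):
    # Renormalize the gates of the chosen experts so they sum to 1.
    gates = scores[t, chosen[t]]
    gates = gates / gates.sum()
    # Only k expert matmuls run per token; the other E - k are skipped.
    for gate, e in zip(gates, chosen[t]):
        out[t] += gate * (tokens[t] @ w_experts[e])

print(out.shape)  # (4, 16): output matches input shape at ~k/E of the expert compute
```

In a real MoE layer the same principle holds at scale: total parameter count grows with E, while per-token FLOPs grow only with k, which is why an MoE model can carry far more parameters than a dense model of equal inference cost.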