RT @dongxi_nlp: After reading the Gemini 3 model card, you will find that Gemini 3 is definitely not a fine-tune of Gemini 2.5; it is a newly trained sparse MoE. In other words, after Gemini 2.5's already excellent RL post-training and parallel...