Google Releases Major Update to Gemini 3 Deep Think, Achieving 84.6% on ARC-AGI-2 Test, Surpassing Claude Opus 4.6 (68.8%) and GPT-5.2 (52.9%), While Reaching “Legendary Grandmaster” Level on Codeforces.
Today (13th), Google announced a major upgrade to Gemini 3 Deep Think. In the ARC-AGI-2 reasoning test—designed specifically to prevent AI from memorizing answer banks and to assess whether AI can infer rules from examples—Gemini 3 Deep Think scored 84.6%.
For comparison, Claude Opus 4.6 (Thinking Max mode) scored 68.8%, GPT-5.2 (Thinking xhigh mode) scored 52.9%, and the human average is around 60%.
Even more impressively, on the original ARC-AGI-1 test, Deep Think achieved 96%, essentially hitting the ceiling of what was once considered “one of the hardest AI exams.”
Deep Think is currently available to Google AI Ultra subscribers, with API early access open to enterprise users.
Beyond the scores, Google highlighted a telling detail: when reviewing a peer-reviewed mathematical paper, Deep Think identified a logical flaw that every previous reviewer had missed. The flaw was later confirmed by mathematicians at Rutgers University.
The case matters because it demonstrates the model’s ability not just on standardized tests but in real, open-ended scientific work. Peer review is academia’s core quality-control mechanism; if AI can reliably add value there, its potential to accelerate research far exceeds what benchmark scores alone can measure.
Deep Think also reached gold-medal level on the written exams of the 2025 International Physics and Chemistry Olympiads, and posted an Elo rating of 3,455 on Codeforces—equivalent to “Legendary Grandmaster,” a tier only a handful of human programmers worldwide have attained.
On “Humanity’s Last Exam,” a benchmark designed by experts across fields to be deliberately challenging for AI, Deep Think scored 48.4% (without tools), setting a new record.
The tech race among the three AI giants is reshaping the market landscape. ChatGPT’s market share has dropped from its peak of 87% to about 68%, while Gemini has surged from under 5% to over 18%, with Anthropic’s Claude steadily nibbling away at the enterprise market.
Google’s unique advantage in this race is its distribution capability. Gemini is integrated into Android, Chrome, Google Workspace, and Search, meaning even if its capabilities are on par with competitors, Google can leverage its channels to attract users.
However, distribution is a double-edged sword. If Gemini’s user experience isn’t compelling enough, it could lose user trust faster than any competitor, because users are “passively exposed” rather than “actively choosing.” OpenAI’s users are paying customers, inherently more tolerant and sticky.
Every upgrade in the AI arms race drives up demand for computing infrastructure. The cost to train cutting-edge models has ballooned from hundreds of millions of dollars in 2024 to billions by 2026. This directly impacts two areas:
First, the transformation path for Bitcoin miners. With mining margins squeezed (JPMorgan estimates BTC production costs have dropped to $77,000, while prices hover around $66,000), large-scale mining operations are accelerating their shift toward AI computing services.
High-cost mining firms aren’t “quitting,” but “diversifying,” moving from Bitcoin mining to providing AI compute contracts.
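The margin squeeze behind that shift is simple arithmetic. A minimal sketch, using only the figures cited above (JPMorgan’s ~$77,000 estimated production cost against a ~$66,000 spot price; both are the article’s numbers, not live data):

```python
def mining_margin(spot_price: float, production_cost: float) -> tuple[float, float]:
    """Return (profit per BTC mined, margin as a fraction of spot price)."""
    profit = spot_price - production_cost
    return profit, profit / spot_price

# Figures cited in the article: ~$66,000 spot vs. ~$77,000 production cost.
profit, margin = mining_margin(spot_price=66_000, production_cost=77_000)
print(f"Profit per BTC: ${profit:,.0f}")  # a loss of $11,000 per coin
print(f"Margin: {margin:.1%}")            # roughly -16.7% of spot
```

At these numbers every coin mined loses money, which is why high-cost operators are repurposing power and data-center capacity for AI compute rather than simply shutting down.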
Second, the narrative around AI tokens. Whenever Google, OpenAI, or Anthropic releases major upgrades, on-chain AI-related tokens (such as decentralized compute protocols) often experience short-term speculation.
But the fundamental issues remain unchanged: decentralized computing still lags far behind enterprise-level AI training in latency and throughput. The narrative can run fast, but infrastructure can’t keep pace with the story.
Deep Think’s upgrade has pushed Google back to the forefront of AI competition, at least in reasoning and scientific domains. But a subtle shift in Google’s wording reveals a change in positioning: it no longer emphasizes “the smartest general AI,” but repeatedly highlights “born for science.”
As benchmarks for general AI become more crowded and differentiation harder, “my AI can help you do science” is a more compelling value proposition than “my AI scores the highest.” If Deep Think can reliably assist peer review, accelerate drug discovery, or find overlooked solutions in physical simulations, that’s more meaningful than any leaderboard ranking.
The challenge is that the gap between “scoring high on benchmarks” and “reliably assisting humans in real scientific scenarios” may be larger than Google hints. Benchmarks have clear answers; science does not.