## Google Introduces Gemini 3 Flash as the Global Default Model
Google has officially launched Gemini 3 Flash, a fast and efficient AI model, and made it the default option in the Gemini app and in its AI-powered Search experience. The move is the next step in the company's strategy to solidify its position in the artificial intelligence market. The model is a significant evolution of Gemini 2.5 Flash, released six months earlier, bringing substantial improvements in reasoning and multimodal performance.
## Outstanding Benchmark Performance
In comparative tests, Gemini 3 Flash shows an impressive leap in quality over its predecessor. On Humanity's Last Exam, a benchmark that assesses competence across a wide range of fields, it scored 33.7% without external tools. For perspective, Gemini 3 Pro scored 37.5%, while Gemini 2.5 Flash managed only 11%.
Its strength is most evident on MMMU-Pro, a benchmark focused on multimodal understanding and complex reasoning, where the new model scored 81.2%, ahead of competing models. These results suggest Google has not sacrificed advanced capabilities in the pursuit of speed and efficiency.
## Distribution and New Capabilities
In the Gemini app's model selector, Gemini 3 Flash now appears as the default option globally, replacing the previous version. Users with more demanding needs, such as complex calculations or software development, can still pick Gemini 3 Pro from the same menu.
The model excels at handling multimodal content. Users can upload videos to get tailored advice, share sketches for the model to interpret, or send audio recordings for detailed analysis or to turn into study material. Its improved grasp of the intent behind a query allows for richer responses, enhanced with visualizations, diagrams, and tables.
A notable new feature is the ability to create application prototypes directly in the Gemini app from natural-language prompts, cutting prototyping time for developers and product designers.
## Cost Efficiency and Availability
The model is already in use at companies such as JetBrains, Figma, Cursor, Harvey, and Latitude, and is accessible via Vertex AI and Gemini Enterprise. For developers, Google offers early access through the Gemini API and through Antigravity, its recently launched development environment.
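For developers weighing the API route, the call pattern matches earlier Gemini models. Below is a minimal sketch using the google-genai Python SDK; the model identifier `gemini-3-flash` is an assumption based on the naming in this article, so check the model list exposed to your account before relying on it.

```python
# Minimal sketch: calling the model through the Gemini API with the
# google-genai Python SDK. The model ID "gemini-3-flash" is assumed from the
# article's naming and may differ from the identifier Google actually exposes.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # or set the GOOGLE_API_KEY env var

response = client.models.generate_content(
    model="gemini-3-flash",  # assumed model name
    contents="Summarize this release note in three bullet points: ...",
)
print(response.text)
```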
On SWE-bench Verified, a coding benchmark, Gemini 3 Pro scores 78%, a remarkable result that places it at the top of the industry. Gemini 3 Flash, for its part, excels at video analysis, structured data extraction, and answering complex visual questions, making it well suited to fast, iterative workflows.
On pricing, the model costs $0.50 per million input tokens and $3.00 per million output tokens, slightly above the previous version's $0.30 and $2.50. Google, however, highlights a crucial advantage: Gemini 3 Flash outperforms Gemini 2.5 Pro while running at roughly three times the speed, and on reasoning tasks it consumes on average 30% fewer tokens than 2.5 Pro, which translates into tangible savings on the total volume of tokens processed.
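As a rough illustration of what those per-token prices mean in practice, the sketch below compares the two Flash generations on a made-up monthly workload; the 100M input / 20M output token volumes are hypothetical and do not account for the lower token consumption Google claims on reasoning tasks.

```python
# Back-of-the-envelope cost comparison using the per-million-token prices
# quoted above. The workload volumes are hypothetical illustration values.
def cost_usd(input_tokens: int, output_tokens: int,
             in_price_per_m: float, out_price_per_m: float) -> float:
    return input_tokens / 1e6 * in_price_per_m + output_tokens / 1e6 * out_price_per_m

tokens_in, tokens_out = 100_000_000, 20_000_000  # assumed monthly volume

gemini_3_flash = cost_usd(tokens_in, tokens_out, 0.50, 3.00)   # -> $110.00
gemini_25_flash = cost_usd(tokens_in, tokens_out, 0.30, 2.50)  # -> $80.00

print(f"Gemini 3 Flash:   ${gemini_3_flash:,.2f}")
print(f"Gemini 2.5 Flash: ${gemini_25_flash:,.2f}")
```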
According to Tulsee Doshi, Senior Director & Head of Product for Gemini models: "We position Flash as the ultimate workhorse model. With lower input and output prices, it offers exceptional value for companies performing large-scale processing."
## A Constantly Evolving Market
Since Google introduced Gemini 3, the platform has been processing over 1 trillion tokens daily through its API infrastructure, a sign of accelerating development and adoption. Across the AI sector, competition among the major players is driving frequent releases and significant performance improvements.
Google argues that this competitive dynamic benefits the entire ecosystem. "Models will continue to advance, challenging each other and pushing the boundaries of the technology. At the same time, the development of new benchmarks and evaluation methodologies helps raise industry standards," Doshi commented.
The new model is already available to all Search users in the United States, while access to the Nano Banana Pro image-generation model is gradually rolling out there as well.