Google, Gemini and XR glasses
Google's AI Mode will soon do the 'tedious' part of search
On Tuesday at Google I/O 2025, the company announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks.
Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality, while using 20–30 percent fewer tokens than the preview version. This edition is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
Welcome to our Google I/O 2025 live blog, where we’re bringing you all the latest from the search giant’s opening keynote at the Shoreline Amphitheater in Mountain View, California.
Google is moving closer to its goal of autonomous agentic AI with a series of enhancements to Gemini 2.5 Pro and Flash.
At its I/O developer conference today, Google announced two new ways to access its AI-powered “Live” mode, which lets users search for and ask about anything they can point their camera at. The feature will arrive in Google Search as part of its expanded AI Mode and is also coming to the Gemini app on iOS.
Android 16 will complete the replacement of the Google Assistant with Gemini Live, the no-subscription-needed chatbot offshoot of Google's Gemini AI platform. It brings that AI assistant to devices beyond phones: not just the watches you might expect, but also cars, TVs, and extended-reality headsets. Here's what you need to know.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.