Google I/O 2025: Live Updates
On Tuesday at Google I/O 2025, the company announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks.
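Google has not published how Deep Think works under the hood, but “considering multiple answers before responding” matches a familiar pattern: sample several candidate answers in parallel, then select one, for example by majority vote (as in self-consistency decoding). The sketch below is purely illustrative of that pattern, not Google’s implementation; the generate() stub is a hypothetical stand-in for any model call.

```python
# Illustrative sketch of "consider multiple answers before responding"
# via parallel sampling plus majority voting (self-consistency style).
# NOT Google's Deep Think implementation, which is unpublished.
import random
from collections import Counter

def generate(question: str) -> str:
    # Hypothetical stand-in for a model call; a real system would sample
    # a full reasoning chain and return its final answer.
    return random.choice(["42", "42", "41"])

def answer_with_parallel_thinking(question: str, n_samples: int = 8) -> str:
    # Draw several independent candidate answers...
    candidates = [generate(question) for _ in range(n_samples)]
    # ...then return the most common one.
    return Counter(candidates).most_common(1)[0][0]

print(answer_with_parallel_thinking("What is 6 * 7?"))
```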
CNET: Google Announces AR Glasses, More Gemini in Chrome, 3D Conferencing and Tons More at Google I/O. From its new Project Aura XR glasses to Chrome’s wants-to-be-more-helpful AI Mode, Gemini Live, and the new Flow generative video tool, Google puts AI everywhere.
Specifically, Google says AI Mode will be able to answer “your toughest questions” and can be used to go more in-depth, asking follow-up questions and providing “helpful web links.” AI Mode is based on a custom version of Gemini 2.5.
Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality, while using 20–30 percent fewer tokens than the preview version. The model is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default in early June.
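For developers who want to try the release build, the call is the same as for any other Gemini model. Here is a minimal sketch using the google-genai Python SDK with an AI Studio API key; the exact model identifier ("gemini-2.5-flash") is an assumption here and should be checked against the current Vertex AI / AI Studio model list.

```python
# Minimal sketch: querying 2.5 Flash through the google-genai SDK.
# The model ID below is an assumption; confirm it in the model list.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # AI Studio API key
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the Gemini 2.5 Flash release in one sentence.",
)
print(response.text)
```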
During its Google I/O 2025 keynote on Tuesday, Google tossed around the Gemini name nonstop, to no one's surprise. It also spent some time talking about something called Project Astra, a key part of its visual AI technology.
In a series of video demos, Google showed off how people wearing Android XR glasses might interact with apps such as Google Maps. A user asked their glasses’ Gemini AI chatbot for directions, and the device brought up a small hologram-like map at the bottom of the internal display.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
“This is our ultimate vision for the Gemini app: to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI,” Google DeepMind CEO Demis Hassabis said onstage.