Google I/O 2025: AI Evolution and New Tools That Matter
Google's latest AI innovations from I/O 2025, including Deep Think mode, enhanced search, and new creative tools that are changing how we work with AI
Hey!
Yesterday and today, Google held its annual I/O conference, where it presented its latest innovations and breakthroughs. I couldn't let the occasion pass without sharing the exciting news with you.
What did I notice?
That they aren’t just showing off new features; they’re fundamentally changing how AI thinks and works. Let’s break down the most exciting parts.
The Evolution of AI Thinking
If you follow me on LinkedIn, you might have seen my post about AlphaEvolve and why it’s such a big deal: Post
Google has taken those same principles and built them into their newest AI model, Gemini 2.5, through something they call “Deep Think” mode.
Why is this special? 🤔
Think about it this way:
Instead of just giving you the first answer that comes to mind (like we humans often do), Deep Think mode approaches problems from multiple angles simultaneously. It’s like having several expert minds working on a problem together, each taking a different approach, and then combining their best insights.
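To make that idea concrete, here's a toy Python sketch of what "attacking a problem from multiple angles and keeping the best result" looks like. This is only a mental model, not how Deep Think is actually implemented; the little budget problem, the three strategies, and the numbers are all made up for illustration.

```python
import random

# Toy illustration of the "multiple angles" idea: several independent
# strategies attack the same problem, and the best result wins.
# This is NOT how Deep Think works internally -- just a mental model.

def greedy(items, budget):
    # Angle 1: always grab the most valuable item that still fits.
    total, spent = 0, 0
    for value, cost in sorted(items, key=lambda x: -x[0]):
        if spent + cost <= budget:
            total, spent = total + value, spent + cost
    return total

def value_per_cost(items, budget):
    # Angle 2: prefer items with the best value-to-cost ratio.
    total, spent = 0, 0
    for value, cost in sorted(items, key=lambda x: -x[0] / x[1]):
        if spent + cost <= budget:
            total, spent = total + value, spent + cost
    return total

def random_restarts(items, budget, tries=200):
    # Angle 3: try many random orderings and keep the best one found.
    best = 0
    for _ in range(tries):
        total, spent = 0, 0
        for value, cost in random.sample(items, len(items)):
            if spent + cost <= budget:
                total, spent = total + value, spent + cost
        best = max(best, total)
    return best

items = [(60, 10), (100, 20), (120, 30), (40, 15)]  # (value, cost)
budget = 50
answers = {f.__name__: f(items, budget) for f in (greedy, value_per_cost, random_restarts)}
print(answers, "-> best:", max(answers.values()))
```

Deep Think does something conceptually similar, just with reasoning paths instead of tiny heuristics: it explores several approaches in parallel and combines the strongest insights before answering.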
This isn’t just about solving math problems faster but about AI that can:
- Find creative solutions to everyday problems
- Come up with ideas no one has thought of before
- Help businesses discover new opportunities
- Assist researchers in making breakthrough discoveries
Because of these capabilities, Google offers this option for $274.99/month (currently US only).
If you're not a company looking to discover something new, you don't even need this; the basic plan is much cheaper and still gets you plenty.
AI That Actually Helps in the Real World
Google is putting this enhanced AI everywhere, but in ways that actually make sense for our daily lives.
Some important ones:
1. AI Mode in Search
- A new tab in Google Search that lets you have conversations
- Ask complex questions and follow-up queries conversationally
- Get AI-generated answers that feel like chatting with a knowledgeable friend
2. Shopping Gets Smarter
- Virtual try-on: See how clothes look on YOU before buying
- AI tracks prices and can even handle checkout when items hit your target price
- It’s like having a personal shopping assistant who never sleeps!
3. Breaking Language Barriers
New things in Google Meet:
- Real-time translation that keeps your voice and tone
- Preserves facial expressions and emotion
- Makes it feel like you’re speaking directly in another language
4. Video Creation
- Until now, OpenAI's Sora was the leading product in this space; it has just been surpassed by Veo 3
- Google is taking it a step further with Flow, a product that combines Veo 3 with Gemini Pro to generate stories or film clips
Making AI More Personal
Google is working hard to make AI feel more like a personal assistant than a generic tool. The Gemini app can now:
- Learn your writing style
- Understand your preferences
- Access (with your permission) your documents to give more relevant help
- Create custom content just for you
They’re developing something called “agent mode” where AI can actually help you get things done across different apps and websites. Imagine saying, “Help me plan my vacation,” and having AI assist with everything from finding flights to booking hotels – all while keeping your preferences in mind.
In some ways, this is an alternative to Codex, which OpenAI recently presented (though Codex is focused on coding).
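To give you a feel for what "agent mode" means under the hood, here's a minimal Python sketch of the generic agent pattern: the system makes a plan, calls tools (flight search, hotel search, and so on), and collects the results. The tool names, the hard-coded plan, and the data below are hypothetical placeholders, not Google's actual API.

```python
# A minimal sketch of the generic "agent" pattern that features like
# agent mode build on: plan, call tools, gather results.
# The tools and data here are hypothetical stand-ins for illustration only.

def search_flights(destination: str) -> str:
    return f"Cheapest flight to {destination}: $420 (placeholder data)"

def find_hotels(destination: str) -> str:
    return f"Top-rated hotel in {destination}: Hotel Example (placeholder data)"

TOOLS = {"search_flights": search_flights, "find_hotels": find_hotels}

def plan_vacation(destination: str) -> list[str]:
    # In a real agent, the model decides which tools to call and in what
    # order; here the "plan" is hard-coded to keep the sketch runnable.
    plan = [("search_flights", destination), ("find_hotels", destination)]
    return [TOOLS[tool_name](arg) for tool_name, arg in plan]

for step in plan_vacation("Lisbon"):
    print(step)
```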
Bridging Design and Development with Stitch
While it was barely mentioned at the I/O event, I stumbled upon Google’s new experiment “Stitch” and found it incredibly promising for our Ideas Universe toolbox. It’s a tool that turns the app creation process on its head by allowing anyone to design UIs and generate functional code in minutes.
Why?
- Generate UI designs from simple text descriptions
- Upload sketches, wireframes, or screenshots and test how they transform into designs
- Quickly create and compare multiple design variants
- Export to Figma for further refinement
- Automatically generate functional frontend code ready for development
What I love about Stitch is how it further democratizes app creation.
I still need to test it, but I imagine that because the generated designs don't carry any application logic yet, design iterations are faster. Once you're satisfied with the designs, you move on to the code and add the logic there.
The traditional gap between having an idea and getting to a working prototype has been massive. Now, whether you’re a professional developer or someone with zero coding experience, you can go from concept to functional UI in a fraction of the time it used to take.
Try it and let me know what you think:
https://stitch.withgoogle.com/
The Future is Closer Than We Think
What strikes me most about these announcements is how quickly things are moving from “interesting research” to “useful tools we can actually use.” The AI that just broke a 56-year-old mathematical record is already being integrated into tools we’ll use daily.
Right now, Google is shipping AI products faster than anyone else.
What’s Next?
I’m particularly excited to try out these new tools and share my experiences with you. Unfortunately, most of them are currently only available in the US, but if you keep reading my future newsletters, you’ll be the first to know when they become available in our region and how they can help us in our daily work.
Talk soon,
Primož