121. That’s the number of times Alphabet CEO Sundar Pichai used the term AI during his keynote at the annual Google I/O event. The longtime Google head shared the count himself, saying someone was sure to tally it, so he had simply made it easier for them.
Indeed, AI was the centrepiece of much of the conference, with Pichai and other product and project leads announcing new AI-powered models and capabilities embedded across Google products, from Search to Photos to YouTube.
“Today, all of our 2-billion user products use Gemini. And we’ve introduced new experiences too, including on mobile, where people can interact with Gemini directly through the app, now available on Android and iOS. And through Gemini Advanced which provides access to our most capable models. Over 1 million people have signed up to try it in just three months, and it continues to show strong momentum,” said Pichai.
Despite rival OpenAI attempting to upstage the event by launching its most advanced model, GPT-4o, just a day before I/O, Google executives seemed confident in their own company’s capabilities in the area, given its longer track record in the field.
Another key takeaway was Pichai’s focus on AI agents: personal assistants at your fingertips that take care of your digital needs. “I think about them as intelligent systems that show reasoning, planning, and memory. They are able to “think” multiple steps ahead, and work across software and systems, all to get something done on your behalf, and most importantly, under your supervision.”
“We are still in the early days, but let me show you the kinds of use cases we’re working hard to solve. Let’s start with shopping. It’s pretty fun to shop for shoes, and a lot less fun to return them when they don’t fit. Imagine if Gemini could do all the steps for you: Searching your inbox for the receipt; Locating the order number from your email; Filling out a return form; even scheduling a UPS pickup. That’s much easier, right?” asserted Pichai.
Another standout development was Ask Photos, a new capability being embedded into Google Photos. Noting that the product was launched almost nine years ago, Pichai said, “Since then, people have used it to organize their most important memories. Today that amounts to more than 6 billion photos and videos uploaded every single day. And people love using Photos to search across their life. With Gemini we’re making that a whole lot easier.”
“Say you’re paying at the parking station, but you can't recall your license plate number. Before, you could search Photos for keywords and then scroll through years’ worth of photos, looking for license plates. Now, you can simply ask Photos. It knows the cars that appear often, it triangulates which one is yours, and tells you the license plate number.”
Speaking further about Gemini, Pichai said one of its most exciting transformations has been in Google Search. “In the past year, we’ve answered billions of queries as part of our Search Generative Experience. People are using it to Search in entirely new ways, and asking new types of questions, longer and more complex queries, even searching with photos, and getting back the best the web has to offer.”
“We’ve been testing this experience outside of Labs. And we’re encouraged to see not only an increase in Search usage, but also an increase in user satisfaction. I’m excited to announce that we’ll begin launching this fully-revamped experience, AI Overviews, to everyone in the U.S. this week. And we’ll bring it to more countries soon,” said Pichai.
At the event, Google also announced Project Astra, the company’s vision for the future of AI assistants. It answers questions in real time through text, audio or video prompts. Astra’s capabilities will be embedded into the Gemini app.