At I/O 2024, Google Announces Generative AI in Everything
The annual Google I/O keynote usually covers a wide array of products and services at Google, but this year’s event was a little different. Google still talked about plenty of different products, but everything was tied to a single feature: AI. Google is now unabashedly focused on adding its latest AI models to every product it makes. For better or worse, you’re going to be interacting with Gemini more than ever before.
For most people, to search the web is to use Google, but the way Google delivers information is about to change. The experimental “Search Generative Experience” that Google began testing last year is becoming the default way to experience its search engine. Starting today, that feature, now known as AI Overviews, will come to everyone in the US. It will expand to other regions in the coming months, bringing AI search results to billions of people this year.
AI Overviews won’t totally replace the traditional list of links, but all that content is going to be pushed way down the page. Instead, Google will ingest and summarize the web to answer your questions, and it claims the system will work well enough to handle complex queries with multiple elements. There will be a few citation links in these summaries, but most of the data that fed Google’s model won’t get top billing. In fact, some types of searches will trigger a fully AI-optimized results page. For example, asking about local attractions or cooking might show you a page created by AI without the usual list of web links.
The expansion of AI extends beyond search—there’s scarcely a single product that didn’t get an AI namecheck in the keynote. Photos is getting smarter AI-infused searching, Gmail will have Gemini-enhanced search and smart replies, and Android is now a Gemini playground. Google even demoed a feature for Google Workspace that will let companies create AI team members. They’ll have names, personal responsibilities, and personalities derived from the context of work chats, emails, and documents. This seems like a dream come true for companies looking to replace people with robots.
For AI developers and people who want to use these models at the highest level, Google said it has enhanced its Gemini Pro model and plans to expand its 1 million-token context window to 2 million tokens in the coming months. There’s also a new model called Gemini 1.5 Flash, which is designed to be faster and more lightweight. Google says Gemini Flash is just as powerful as Gemini Pro, but it was a bit vague on why it’s faster.
Google’s Dave Burke took the stage to discuss AI on Android, and he didn’t mention Assistant once. It’s probably only a matter of time before Google retires that system and transitions everyone to Gemini—it’s only missing a few key Assistant features at this point. Currently, you have to download an app, but I expect Gemini will eventually be integrated into the billions of Android phones in use around the world.
As part of the Android AI push, Google has given Gemini the ability to understand the content of your phone screen. Remember Google Now On Tap? It’s basically that feature, except it works. Mostly. Gemini still seems very cautious. I fed it a screen of the ExtremeTech team chat, but Gemini refused to answer any questions about it. You see, I had joked about taking a shot each time Google said “Gemini,” and my coworkers lightheartedly accused me of wanting to kill or maim them. It was funny, but Gemini didn’t see it that way. Google’s AI said I was an “urgent safety threat” and my coworkers should “Tell a trusted adult.” Weird suggestion, Gemini.
It can be hard to keep track of all of Google’s new AI initiatives because they’re all interconnected. Google talked about several specialized AI models that will feed into one product or another “in the coming weeks,” and it showed demos of products that don’t quite exist yet, though elements of them are apparently destined for integration into existing products. For instance, there’s Project Astra, which is Google’s attempt to build an artificial general intelligence (AGI).
Google unveiled Gemini with a demo video it later had to admit was faked, but this time it showed a version of Astra reacting live to a camera feed in similar ways. It was pretty impressive. Parts of Astra will arrive in a new Gemini feature called Gemini Live this summer, giving you the ability to carry on a real-time voice conversation with the chatbot.
Google knows how this looks to everyone. It’s been talking about AI for years, but the actual consumer-facing implementations were limited. These past 18 months have seen generative AI integrated throughout the company at a breakneck pace. At the end of the keynote, CEO Sundar Pichai joked that if anyone was wondering how many times they said “AI” in the keynote, there’s no need to count. They fed the event script into Gemini, which helpfully counted up 120 uses. So, that’s the record to break next year.