Generative AI

What’s the Killer AI App for Consumers? Google Finally Has a Contender


Google CEO Sundar Pichai. Photo: Boris Streubel/Getty Images

  • Google showcases potential killer apps for generative AI.
  • The company demoed an AI agent that can help you remember where you left your glasses. 
  • Google’s infrastructure, talent, data, and experience make it a strong player in developing useful AI tech.

Since ChatGPT burst onto the scene in 2022, there’s been no real “killer app” to get consumers embracing AI in massive numbers.

Even ChatGPT may not count: The chatbot’s online traffic is still only about 2% of Google’s, according to Similarweb. Other chatbots are doing much worse, leaving investors mostly focused on corporate use cases.

A killer app is an application that is so useful and so easy to use that it convinces everyday people to adopt a whole new technology en masse.

Spreadsheets and word-processing software made many individuals buy personal computers for the first time. The internet, possibly the biggest killer app of all, made us all buy smartphones, tablets, and a host of other connected devices.

So, what will be the killer app for generative AI? Put another way: My mom has never used ChatGPT, but she Googles stuff all the time. What will get this octogenarian, and everyone else, using genAI as often as they use toothbrushes?

AI killer app contenders

At its I/O developer conference on Tuesday, Google showed off some pretty amazing AI killer app contenders.

These were shared mostly under the umbrella of Project Astra, an experimental Google endeavor at the leading edge of AI models and agents.

“To be truly useful, an agent needs to understand and respond to the complex and dynamic world just like people do — and take in and remember what it sees and hears to understand context and take action,” Demis Hassabis, CEO of Google DeepMind, said. “It also needs to be proactive, teachable and personal, so users can talk to it naturally and without lag or delay.”

Never forget where you left your glasses again

In a video, Google showed an employee holding up a smartphone with the camera on. She walked through DeepMind’s office in London pointing the device at various things and asking questions.

The camera at one point showed a speaker and she asked what it was. A Google AI model lurking on the phone (and in the cloud) answered correctly.

Then she pointed the smartphone at a colleague’s computer screen, which had a bunch of software code on it. The AI model correctly told her what that code was for, just by “looking” at the live video feed from the camera.

After a couple more examples, the DeepMind employee asked if the AI agent remembered where she left her glasses. The Google system replied that she’d left them next to an apple on her desk in the office. She walked over there and, lo and behold, there were her glasses by the apple on her desk. The AI agent “remembered” the glasses in the background of previous frames from the phone’s live video feed.

If Google’s AI agent can help regular people never lose their glasses again (or their keys or other stuff at home or at work), then I think we have a killer app.

Simple, useful, and quirky things like this can turn wonky technology into products everyone uses. For instance, famed investor Warren Buffett never bought a personal computer until he wanted to play bridge online with Bill Gates.

Returning shoes

Other Google executives discussed similarly compelling, everyday applications for genAI.

CEO Sundar Pichai said the company’s AI agents can plan and execute multiple tasks — to the point where the bots will be able to return a pair of shoes you ordered online and don’t want.

Calendar entries

Google VP Sissie Hsiao showed off another killer application for this new technology.

In this demo, a smartphone camera was pointed at a school flier with details of several upcoming events. The Google AI agent captured all the dates, times, and other details and automatically loaded them into the user’s Google Calendar.
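
To get a sense of the plumbing behind that demo, here is a minimal sketch built on Google’s publicly documented Gemini API (the google-generativeai SDK) and the Google Calendar API. It is not the pipeline Hsiao showed: the model name, the JSON prompt, and the credential file names are all assumptions for illustration.

```python
# Sketch: photograph a flier, have Gemini extract events as JSON, then
# insert them into Google Calendar. The model choice, prompt, and files
# ("school_flier.jpg", "credentials.json") are hypothetical.
import json

import google.generativeai as genai
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # hypothetical key
model = genai.GenerativeModel(
    "gemini-1.5-flash",  # assumed model choice
    generation_config={"response_mime_type": "application/json"},  # JSON mode
)

# Step 1: ask the model to read the flier photo into structured events.
flier = Image.open("school_flier.jpg")
prompt = (
    "List every event on this flier as a JSON array of objects with keys "
    '"summary", "start", and "end", where start and end are ISO-8601 '
    "datetimes with timezone offsets."
)
events = json.loads(model.generate_content([prompt, flier]).text)

# Step 2: push each event into the user's primary calendar.
flow = InstalledAppFlow.from_client_secrets_file(
    "credentials.json",
    scopes=["https://www.googleapis.com/auth/calendar.events"],
)
service = build("calendar", "v3", credentials=flow.run_local_server(port=0))
for e in events:
    service.events().insert(
        calendarId="primary",
        body={
            "summary": e["summary"],
            "start": {"dateTime": e["start"]},  # offsets included above
            "end": {"dateTime": e["end"]},
        },
    ).execute()
```

The demo’s appeal is that it collapses all of this glue code into pointing a camera at a piece of paper.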

Rental agreements

What if you want to know how a pet might change your apartment rental situation? Do you want to actually read the 12 legal documents you skimmed and signed last year? Of course you don’t.

You can now drop all these documents into Google’s Gemini Advanced AI model and start asking it questions like “If I get a pet, how does this change my rental situation?”

Google’s AI agent will ingest all the documents quickly and answer your questions by referencing specific parts of the agreements.
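
If you’d rather script this than use the Gemini Advanced app, here is a minimal sketch against the public Gemini File API. The file names, model choice, and prompt wording are assumptions, and the consumer product’s internals may well differ.

```python
# Sketch: upload signed rental documents, then ask questions grounded in
# them. "lease.pdf" and "pet_addendum.pdf" are hypothetical file names.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical key
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed long-context model

# Upload each document once; the File API returns handles usable as prompt parts.
docs = [genai.upload_file(path=p) for p in ["lease.pdf", "pet_addendum.pdf"]]

response = model.generate_content(
    docs
    + [
        "If I get a pet, how does this change my rental situation? "
        "Quote the specific clauses you are relying on."
    ]
)
print(response.text)
```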

If generative AI can do annoying, boring tasks like this, a lot of regular people will start using the technology pretty quickly.

“Google was built for this moment”

When done well, these AI agent tasks will seem easy. But Google has been working behind the scenes for at least a decade to get to this point.

This type of technology requires massive computing power, lots of energy, huge data centers, muscular AI chips, lightning-fast networking gear, and oodles of information to train the models. Google has all this in spades.

DeepMind’s Hassabis gave a little taste of this when discussing Project Astra’s ability to respond to questions during live video feeds.

“These agents were built on our Gemini model and other task specific models, and were designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall,” he explained.
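
To make that description concrete, here is a toy cache-and-recall sketch in Python. Everything in it (the event record, the bounded deque, the substring match) is invented for illustration; Google’s actual encoders and memory system are not public.

```python
# Toy "timeline of events": cache labels extracted from each video frame,
# then answer recall questions by scanning backward through the cache.
import time
from collections import deque
from dataclasses import dataclass


@dataclass
class TimelineEvent:
    timestamp: float
    labels: list[str]  # what a vision model claims to see in one frame


class TimelineCache:
    def __init__(self, max_events: int = 10_000):
        # Bounded memory: the oldest frames fall off the back.
        self.events: deque[TimelineEvent] = deque(maxlen=max_events)

    def ingest(self, labels: list[str]) -> None:
        # In a real agent, labels would come from continuously encoding
        # video and speech; here we simply take them as given.
        self.events.append(TimelineEvent(time.time(), labels))

    def recall(self, query: str) -> TimelineEvent | None:
        # Most recent frame whose labels mention the query term.
        for event in reversed(self.events):
            if any(query in label for label in event.labels):
                return event
        return None


# The glasses demo, in miniature: the agent "saw" them a few frames ago.
cache = TimelineCache()
cache.ingest(["desk", "apple", "glasses next to an apple"])
cache.ingest(["whiteboard", "speaker"])
hit = cache.recall("glasses")
if hit:
    print(f"Last seen at {hit.timestamp:.0f}: {hit.labels}")
```

The real system does this against a live multimodal stream, which is what makes the “where are my glasses” moment land.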

There are very few other companies with the infrastructure, talent, data, and experience to pull this off. (Maybe OpenAI and Microsoft together?)

“Google was built for this moment,” Pichai said.


