Google Veo, a serious swing at AI-generated video, debuts at Google I/O 2024
Google’s gunning for OpenAI’s Sora with Veo, an AI model that can create 1080p video clips around a minute long given a text prompt.
Unveiled on Tuesday at Google’s I/O 2024 developer conference, Veo can capture different visual and cinematic styles, including shots of landscapes and time lapses, and make edits and adjustments to already generated footage.
“We’re exploring features like storyboarding and generating longer scenes to see what Veo can do,” Demis Hassabis, head of Google’s AI R&D lab DeepMind, told reporters during a virtual roundtable. “We’ve made incredible progress on video.”
Veo builds on Google’s preliminary commercial work in video generation, previewed in April, which tapped the company’s Imagen 2 family of image-generating models to create looping video clips.
But unlike the Imagen 2-based tool, which could only create low-resolution, few-seconds-long videos, Veo appears to be competitive with today’s leading video generation models — not only Sora, but models from startups like Pika, Runway and Irreverent Labs.
In a briefing, Douglas Eck, who leads research efforts at DeepMind in generative media, showed me some cherry-picked examples of what Veo can do. One in particular — an aerial view of a bustling beach — demonstrated Veo’s strengths over rival video models, he said.
“The detail of all the swimmers on the beach has proven to be hard for both image and video generation models — having that many moving characters,” he said. “If you look closely, the surf looks pretty good. And the sense of the prompt word ‘bustling,’ I would argue, is captured with all the people — the lively beachfront filled with sunbathers.”
Veo was trained on lots of footage. That’s generally how it works with generative AI models: Fed example after example of some form of data, the models pick up on patterns in the data that enable them to generate new data — videos, in Veo’s case.
Where did the footage to train Veo come from? Eck wouldn’t say precisely, but he did admit that some might’ve been sourced from Google’s own YouTube.
“Google models may be trained on some YouTube content, but always in accordance with our agreement with YouTube creators,” he said.
The “agreement” part may technically be true. But it’s also true that, considering YouTube’s network effects, creators don’t have much choice but to play by Google’s rules if they hope to reach the widest possible audience.
Reporting by The New York Times in April revealed that Google broadened its terms of service last year in part to allow the company to tap more data to train its AI models. Under the old ToS, it wasn’t clear whether Google could use YouTube data to build products beyond the video platform. Not so under the new terms, which loosen the reins considerably.
Google’s far from the only tech giant leveraging vast amounts of user data to train in-house models. (See: Meta.) But what’s sure to disappoint some creators is Eck’s insistence that Google’s setting the “gold standard” here, ethics-wise.
“The solution to this [training data] challenge will be found with getting all of the stakeholders together to figure out what are the next steps,” he said. “Until we make those steps with the stakeholders — we’re talking about the film industry, the music industry, artists themselves — we won’t move fast.”
Yet Google’s already made Veo available to select creators, including Donald Glover (AKA Childish Gambino) and his creative agency Gilga. (Like OpenAI with Sora, Google’s positioning Veo as a tool for creatives.)