Meta releases its new Llama 3 open-source AI model. Is it enough to keep Meta at the front of the pack?
Meta today debuted updated versions of its free Llama AI model, which has been among the most popular open-source models on the market but faces growing competition both from other open-source contenders and from companies offering paid, closed-access models. Called Llama 3, the new set of models represents Meta’s attempt to match some of the capabilities offered by rivals such as OpenAI, Anthropic, and Google in their latest models, capabilities that have so far been available only through closed, paid proprietary services.
Meta said it wants the most capable Llama 3 models to be multimodal, meaning they can take in text, images, and even video and then generate outputs in any of those formats. Meta is also aiming to make the models multilingual and to give them larger “context windows,” meaning they can be fed larger amounts of data to analyze or summarize in a single prompt. (Larger context windows have also been shown to reduce a model’s hallucination rate, or how often it outputs inaccurate information in response to a prompt.) The Llama 3 models boast improved reasoning and coding capabilities, too, according to Meta.
As many industry-watchers had expected, the company has initially released two smaller versions of Llama 3, which it said in a press release “are the best open-source models of their class, period” and will be available shortly on platforms including AWS, Google Cloud, Databricks, Microsoft Azure, and Hugging Face. But these models are not as capable as some of the most high-performing proprietary models in the market.
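For developers who want to experiment once the checkpoints appear on those platforms, the workflow should look much like it does for other open models hosted on Hugging Face. The sketch below is illustrative only: it assumes the smaller instruction-tuned Llama 3 checkpoint is published under a meta-llama repository whose license has been accepted, and it uses the standard Hugging Face transformers API rather than anything specific to Meta’s release.

```python
# Illustrative sketch: loading an assumed Llama 3 checkpoint from Hugging Face.
# The repo id below is an assumption based on Meta's published naming; the
# repository is gated, so the license must be accepted on the model page first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the weights across available GPUs (requires accelerate)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Summarize Meta's Llama 3 announcement in one sentence.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```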
A much larger version of Llama 3—with more than 400 billion parameters—is still being trained, with the company saying it will make a decision on whether and how to release it following safety testing in the coming months.
But Ragavan Srinivasan, Meta’s VP of product in charge of Llama, told Fortune that this larger version is “trending to be on par with some of the best-in-class proprietary models that you see out in the market today,” adding that it will have additional capabilities “baked into it.” These capabilities would match or exceed what is currently on offer from models such as Claude 3, Gemini, or GPT-4.
The Llama 3 announcement accompanied the release of a new version of Meta AI, the company’s assistant that will now be powered by Llama 3 and built into the search box at the top of WhatsApp, Instagram, Facebook, and Messenger—integrating real-time knowledge from Google and Bing into its answers.
“We believe Meta AI is now the most intelligent AI assistant that you can freely use,” Meta CEO Mark Zuckerberg said as part of the announcement. Meta’s AI assistant received mixed reviews after its original launch in September 2023, particularly for its cast of celebrity AI characters, which sometimes responded inappropriately. And as recently as yesterday, there were reports that Meta’s AI had unexpectedly chimed in with a comment in a Facebook group for parents.
A highly competitive AI model landscape
Meta is launching Llama 3 into a generative AI landscape that is far different from the one that greeted its predecessor, Llama 2, when it debuted last summer. Since then, open-source AI has exploded, even as debate has swirled over the security and safety of AI models that allow users open access to source code and model weights.
Paris-based Mistral, cofounded by former Meta researchers, burst onto the scene in June 2023 and has since released a variety of well-received open-source models; just this week it was reportedly seeking a $5 billion valuation. Two months ago Google released Gemma, a family of open models built from the same research and technology as its proprietary Gemini.
Meanwhile, the capabilities of proprietary models such as those developed by OpenAI, Google, and Anthropic keep advancing—but at ever greater costs, given the massive amounts of compute necessary to train them. Meta, in fact, is one of the Big Tech leaders in spending on training and running models: In January, Mark Zuckerberg indicated that Meta is spending billions of dollars on Nvidia AI chips, saying that by the end of 2024, the company’s computing infrastructure will include 350,000 H100s. But Meta has also committed to making its models freely available as open-source products in hopes that it will control the platform on which others are building and eventually find a way to monetize that position. It’s an expensive strategy with no certain path to profits anytime soon.
Finally, the war for AI talent continues to heat up, with fierce competition for top researchers and many former Big Tech engineers jumping ship to launch their own startups. As Fortune recently reported, Meta has seen its own AI brain drain of late, with several high-level departures, including its senior director of generative AI. That matters for the generative AI race Zuckerberg is running: if Meta wants to stay ahead, it needs to hold on to the researchers most qualified to build these models. Conversely, building best-in-breed models helps attract top talent, who are generally drawn to the most ambitious AI labs.
AI is Meta’s top priority
AI has become Meta’s top priority, replacing the company’s earlier emphasis on the metaverse, so it clearly plans to do what it takes to stand out in a crowded field. Last October, Zuckerberg said that “AI will be our biggest investment area in 2024, both in engineering and computer resources.” As part of today’s Llama announcement, he doubled down on that theme, saying, “We’re investing massively to build the leading AI.”
Meta has also been a longtime champion of open-source research. It created an open-source ecosystem around the PyTorch framework and recently celebrated the 10th anniversary of FAIR (Fundamental AI Research), which was created “to advance the state of the art of AI through open research for the benefit of all” and has been led by Meta chief scientist Yann LeCun.
It was LeCun who pushed for Llama 2 to be released with a commercial license along with the model weights. “I advocated for this internally,” he said at the AI Native conference in September 2023. “I thought it was inevitable, because large language models are going to become a basic infrastructure that everybody is going to use; it has to be open.”
Manohar Paluri, a Meta vice president in the AI organization, told Fortune that today’s intense open-source AI competition makes the company feel “supported and validated in our mission of accelerating innovation and doing it in the open so we can build safer and more productive models that get better and better with each iteration.” The more models that are built on top of each other, including on Llama, “the faster we can actually make progress on enabling more use cases for the end users.”
Digging into Meta’s Llama 3 data
Meta did not disclose specifics about what data was used to train Llama 3, other than to emphasize that it was trained on a “variety of public data”—which the company previously admitted includes public Facebook and Instagram posts—and “excluded and removed data from certain sources known to contain a high volume of personal information about private individuals.”
Meta did say that the training dataset was seven times as large as the one used to train the previous version, Llama 2, and includes four times as much code. Over 5% of the Llama 3 pretraining dataset consists of “high-quality non-English data that covers over 30 languages.”
In addition, Meta disclosed that previous versions of Llama were used to train Llama 3. “We found that previous generations of Llama are good at identifying high-quality data, so we used Llama 2 to build the text-quality classifiers that are used to power Llama 3. We also leveraged synthetic data to train in areas such as coding, reasoning, and long context. For example, we used synthetic data to create longer documents.”
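Meta has not published the details of those text-quality classifiers, but the general idea of using an earlier-generation model to grade candidate training documents is straightforward to sketch. The following is a minimal, hypothetical example assuming a prompt-based scorer built on a Llama 2 chat checkpoint; the prompt wording, threshold, and helper functions are illustrative assumptions, not Meta’s actual pipeline.

```python
# Hypothetical sketch of LLM-assisted data filtering: prompt an earlier-generation
# model to rate documents for quality and keep only the highest-rated ones.
from transformers import pipeline

# Assumes access to a Llama 2 chat checkpoint on Hugging Face (gated repo).
scorer = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

PROMPT = (
    "Rate the educational quality of the following text from 1 (spam or "
    "boilerplate) to 5 (clear, informative writing). Answer with a single digit.\n\n"
    "Text: {doc}\n\nRating:"
)

def quality_score(doc: str) -> int:
    """Ask the model for a 1-5 rating and parse the first digit in its answer."""
    prompt = PROMPT.format(doc=doc[:2000])  # truncate long documents for speed
    out = scorer(prompt, max_new_tokens=4, do_sample=False)
    answer = out[0]["generated_text"][len(prompt):]
    for ch in answer:
        if ch.isdigit():
            return int(ch)
    return 1  # treat unparseable answers as low quality

def filter_corpus(docs, threshold=4):
    """Keep only documents the scorer rates at or above the threshold."""
    return [d for d in docs if quality_score(d) >= threshold]
```

In practice, a filter like this would run across billions of documents, so production pipelines typically distill such prompted judgments into a small, fast classifier rather than calling a large model on every document.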
Meta’s race for AI dominance
But as Llama 3 leaps into the wild, the bottom line is that Meta will have to keep running fast—and spending big—in order to fulfill Mark Zuckerberg’s bid for AI dominance. Catching up to OpenAI while still promoting open source will be no small feat, but if any organization has the will, and the money, to make it happen, it is Meta.
Of course, the larger question is why? What’s in all this for Meta? Strategically, Zuckerberg seems committed to ensuring that if AI is the next big platform shift, as many investors and analysts contend, Meta is not beholden to another platform. Zuckerberg sees leadership in the AI race as the way Meta can remain the master of its own fate.
Correction, April 18: An earlier version of this story erroneously stated that Meta had already decided to release the largest and most powerful version of Llama 3 as an open-source AI model. The story has been updated to clarify that Meta has not yet made this decision, and to provide additional context about the model’s size and future safety testing.