Generative AI

Meta Unveils Latest AI Chip as It Races to Meet Its Gen AI Ambitions


“We’re designing our custom silicon to work in cooperation with our existing infrastructure as well as with new, more advanced hardware (including next-generation GPUs) that we may leverage in the future,” the company added. “Meeting our ambitions for our custom silicon means investing not only in compute silicon but also in memory bandwidth, networking and capacity as well as other next-generation hardware systems.

“We currently have several programs underway aimed at expanding the scope of MTIA, including support for generative AI (Gen AI) workloads.”

Meta building compute infrastructure to support AI workloads

Meta has previously announced plans to build out massive compute infrastructure to support its Gen AI ambitions, including the latest version of its open source Llama LLM, Llama 3, which is set to be released in 2024. In a statement earlier this year, Meta CEO Mark Zuckerberg said that the company was bringing its AI research teams “closer together” and building out its compute infrastructure to support its future roadmap, which includes a further push into AI and a move toward artificial general intelligence.

To meet this demand, Meta, which turned 20 years old in February 2024, plans to have approximately 350,000 H100 GPUs from chip designer Nvidia by the end of 2024, Zuckerberg said. Combined with equivalent chips from other suppliers, that will give Meta around 600,000 GPUs in total by the end of the year.

“Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit,” Zuckerberg said. 

“We’re currently training our next-gen model Llama 3, and we’re building massive compute infrastructure to support our future roadmap, including 350,000 H100s by the end of this year – and overall almost 600,000 H100s equivalents of compute if you include other GPUs.”
