Weekly AI Recap: Microsoft announces London AI hub and Dove rejects generative AI in ads
Plus, some big developments in the race to build AI chips.
Microsoft announced last month the launch of Microsoft AI, a new initiative aimed at further developing Copilot and the company’s other leading AI-powered products. The effort is helmed by Mustafa Suleyman, co-founder of DeepMind (now owned by Google) and Inflection AI, and Karén Simonyan, another Inflection AI co-founder.
On Sunday, the company followed up with an announcement that Microsoft AI was opening what Suleyman describes in a company blog post as “an AI hub in the heart of London.”
The hub, according to the blog post, “will drive pioneering work to advance state-of-the-art language models and their supporting infrastructure, and to create world-class tooling for foundation models, collaborating closely with our AI teams across Microsoft and with our partners, including OpenAI.”
The company says it will be hiring for positions in the London AI hub over the coming months.
Dove pledges to never use AI to represent ‘real bodies’ in ads
In celebration of the 20th anniversary of its ‘Real Beauty’ campaign, Unilever-owned brand Dove published a statement on Tuesday pledging that it would never use AI to depict women in its advertising. The pledge follows a survey conducted by PR firm Edelman, which found that one in three women across 20 countries feel pressure to alter their physical appearance as a result of being exposed to online images, both real and AI-generated.
The brand has also released the ‘Real Beauty Prompt Playbook,’ a guide intended to help users of popular generative AI platforms create more diverse, inclusive and realistic images of women (as opposed to the narrower, more conventional standards of beauty that such platforms might represent by default).
“The rise of AI poses one of the greatest threats to real beauty in the last 20 years, meaning representation is more important than ever,” Dove wrote on its website.
Intel takes aim at Nvidia with new Gaudi 3 chip
At its Intel Vision 2024 conference on Tuesday, semiconductor manufacturer Intel unveiled its new Gaudi 3 chip, which the company hopes will give it an edge in its ongoing effort to catch up with the world’s leading AI chip designers, Nvidia and Advanced Micro Devices (AMD).
According to Intel, the Gaudi 3 chip delivers 50% better inference performance (that is, running a trained large language model on new inputs to generate outputs) and 40% greater power efficiency than Nvidia’s H100 GPU, which is currently used by AI industry giants like OpenAI and Google to train and run large language models.
Despite its recent advancements in AI chips, Intel still has a long way to go to catch up with its competitors. The company’s stock is down 25% so far this year, according to Yahoo Finance, while Nvidia’s and AMD’s are up 74% and 15%, respectively.
Meta’s Llama 3 open-source LLM to be released within the next month
Meta executives stated during an event in London on Tuesday that Llama 3, the latest iteration of the company’s open-source large language model, will be released sometime in the coming weeks, according to TechCrunch.
“Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3,” Meta president of global affairs Nick Clegg reportedly said during the event. “There will be a number of different models with different capabilities, different versatilities [launched] during the course of this year, starting really very soon.”
The Information reported the previous day that Meta was planning to launch Llama 3 next week.
Since the public release of Llama 2 in July of last year, Meta has sought to position itself as an open-source and thus more developer-friendly alternative to other leading AI companies, like OpenAI and Google, that are building proprietary models. The debate between the two camps has since turned into a philosophical rift running through the AI industry.
Meta has not yet confirmed the parameter count for Llama 3, but rumors suggest it could be 140 billion, roughly twice that of its largest predecessor, Llama 2 (70 billion parameters).
Google Cloud customers to gain access to Axion CPUs ‘later this year’
Google also announced on Tuesday that it’s building custom Arm-based CPUs for data centers. (Arm Limited is a British company that designs CPU architectures and licenses the intellectual property to chipmakers.)
Axion, as the new CPUs are being called, “deliver industry-leading performance and energy efficiency and will be available to Google Cloud customers later this year,” according to a blog post from Google.
The company says that the CPUs are already being used to power a number of its platforms, including YouTube Ads and Google Earth Engine.
Google Cloud competitors Amazon Web Services (AWS) and Microsoft Azure have also previously introduced their own Arm-based processors.
For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.