OpenAI debuts GPT-4o ‘omni’ model now powering ChatGPT


OpenAI announced a new flagship generative AI model on Monday that it calls GPT-4o — the “o” stands for “omni,” referring to the model’s ability to handle text, speech, and video. GPT-4o is set to roll out “iteratively” across the company’s developer and consumer-facing products over the next few weeks.

OpenAI CTO Mira Murati said that GPT-4o provides “GPT-4-level” intelligence but improves on GPT-4’s capabilities across multiple modalities and media.

“GPT-4o reasons across voice, text and vision,” Murati said during a streamed presentation at OpenAI’s offices in San Francisco on Monday. “And this is incredibly important, because we’re looking at the future of interaction between ourselves and machines.”

GPT-4 Turbo, OpenAI’s previous most advanced model, was trained on a combination of images and text, and could analyze images and text to accomplish tasks like extracting text from images or even describing the content of those images. But GPT-4o adds speech to the mix.

What does this enable? A variety of things. 

GPT-4o greatly improves the experience in OpenAI’s AI-powered chatbot, ChatGPT. The platform has long offered a voice mode that vocalizes the chatbot’s responses using a text-to-speech model, but GPT-4o supercharges this, allowing users to interact with ChatGPT more like an assistant. 

For example, users can ask the GPT-4o-powered ChatGPT a question and interrupt it mid-answer. The model delivers “real-time” responsiveness, OpenAI says, and can even pick up on nuances in a user’s voice, responding with voices in “a range of different emotive styles” (including singing). 

GPT-4o also upgrades ChatGPT’s vision capabilities. Given a photo — or a desktop screen — ChatGPT can now quickly answer related questions, on topics ranging from “What’s going on in this software code?” to “What brand of shirt is this person wearing?”

ChatGPT’s desktop app in use in a coding task.
Image Credits: OpenAI

These features will evolve further in the future, Murati says. While today GPT-4o can look at a picture of a menu in a different language and translate it, in the future, the model could allow ChatGPT to, for instance, “watch” a live sports game and explain the rules to you.

“We know that these models get more and more complex, but we want the experience of interaction to actually become more natural, easy, and for you not to focus on the UI at all, but just focus on the collaboration with ChatGPT,” Murati said.

GPT-4o is more multilingual as well, OpenAI claims, with improved performance across 50 different languages. In OpenAI’s API, the company says GPT-4o is twice as fast as GPT-4 (specifically GPT-4 Turbo), half the price and has higher rate limits. 
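To make the API claim above concrete, here is a minimal sketch of the kind of request body a GPT-4o call can carry, following the mixed text-and-image message shape OpenAI documents for its chat completions endpoint. The helper function name and the image URL are illustrative, not from the article, and the snippet only assembles the request rather than sending it:

```python
def build_vision_request(question: str, image_url: str) -> dict:
    """Assemble a chat-completions-style request body pairing a text
    question with an image, since GPT-4o accepts mixed modalities."""
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                # A single user turn can mix content types.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_vision_request(
    "What's going on in this software code?",
    "https://example.com/screenshot.png",  # placeholder URL
)
```

In practice, a body like this would be passed to the OpenAI SDK's chat completions call with an API key configured; the lower price and higher rate limits OpenAI cites apply per token and per account tier.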

Voice isn’t a part of the GPT-4o API for all customers at present. OpenAI, citing the risk of misuse, says that it plans to first launch support for GPT-4o’s new audio capabilities to “a small group of trusted partners” in the coming weeks.

GPT-4o is available in the free tier of ChatGPT starting today, and to subscribers to OpenAI’s premium ChatGPT Plus and Team plans with “5x higher” message limits. (OpenAI notes that ChatGPT will automatically switch to GPT-3.5 when users hit the rate limit.) OpenAI says that it’ll roll out the improved voice experience underpinned by GPT-4o in alpha to Plus users in the next month or so, alongside Enterprise options with GPT-4o.

In other news, OpenAI is releasing a refreshed ChatGPT UI on the web with a new, “more conversational” home screen and message layout, and a desktop version of ChatGPT for macOS that lets users ask questions via a keyboard shortcut or take and discuss screenshots. Plus users will get access first, starting today, and a Windows version of the app will arrive later in the year.

Elsewhere, access to the GPT Store, OpenAI’s library of third-party chatbots built on its AI models, is now available to users of ChatGPT’s free tier. And free users can take advantage of features that were formerly paywalled, like a memory capability that lets ChatGPT “remember” preferences for future interactions.

Read more about OpenAI's Spring Event on TechCrunch


