Generative AI

Intel’s advanced AI processors at Red Hat Summit


During Red Hat Summit in Denver this week, Red Hat Inc. announced plans to bring Intel Corp.’s most advanced artificial intelligence processors to its customers.

By making cloud-hosted versions of Intel’s Gaudi AI accelerators available, along with the chipmaker’s Xeon and Core Ultra central processing units and Arc graphics processing units, Red Hat is acknowledging the role that major chip suppliers such as Intel play in the model-driven world of AI.

“The history of AI is really intertwined with the history of computing … we’ve had this massive explosion in data that we’ve been trying to harness the power of over the past couple of decades,” said Jeni Barovian (pictured, left), vice president and general manager of data center AI solutions strategy and product management at Intel. “Now what you have is the compute, the computational capability really catching up with the compute resources in these machines.”

Barovian spoke with theCUBE Research’s Rob Stretchay and Paul Gillin at Red Hat Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. She was joined by Steven Huels (right), vice president and general manager of the AI Business Unit at Red Hat Inc., and they discussed key elements of the Red Hat/Intel partnership in facilitating generative AI development. (* Disclosure below.)

Intel plays a role in Red Hat’s ecosystem for model deployment

From Red Hat’s perspective, demand for computational resources to run AI models requires an ability to operate across different hardware footprints. Red Hat draws on its ecosystem partners such as Intel to supply effective and efficient processor technology.

TheCUBE interviews Jeni Barovian of Intel and Steven Huels of Red Hat at Red Hat Summit.

“What we talk to [customers] about a lot is our hybrid cloud story, especially with OpenShift AI where we can provide a core platform that allows you to build, train, tune, deploy, monitor and manage those models,” Huels said. “Underpinning that is a lot of our ecosystem. We give them the optionality based on the type of workload, the specific use case.”

Last month, the Linux Foundation announced the Open Platform for Enterprise AI, or OPEA, as its latest Sandbox project. Intel and Red Hat are both founding members of the initiative, which champions the development of open and composable generative AI systems.

“Open source has been critical to everything we’ve done in partnership with Red Hat over the course of the past 30 years or so, and that’s certainly what we’re driving now with our approach to AI,” Barovian said. “There are about 15 companies that are signed up at this point. It’s really trying to take an open and community approach to accelerating that pace of innovation, that pace of development so enterprises can really harness the power of generative AI.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of Red Hat Summit:

(* Disclosure: Intel Corp. sponsored this segment of theCUBE. Neither Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo by SiliconANGLE
