Intel, VMware, Linux Foundation & Others Form Open Platform for Enterprise AI
To provide open frameworks for generative AI capabilities, such as retrieval-augmented generation (RAG), across ecosystems, the Linux Foundation, Intel and other companies and groups have created the Open Platform for Enterprise AI (OPEA).
What is the Open Platform for Enterprise AI?
OPEA is a sandbox project within the LF AI & Data Foundation, a part of the Linux Foundation. The plan is to encourage adoption of open generative AI technologies and create “flexible, scalable GenAI systems that harness the best open source innovation from across the ecosystem,” according to a press release about OPEA.
The following companies and groups have joined the initiative:
- Anyscale.
- Cloudera.
- DataStax.
- Domino Data Lab.
- Hugging Face.
- Intel.
- KX.
- MariaDB Foundation.
- MinIO.
- Qdrant.
- Red Hat.
- SAS.
- VMware (acquired by Broadcom).
- Yellowbrick Data.
- Zilliz.
If the initiative succeeds, it could bring greater interoperability to those vendors' products and services.
“As GenAI matures, integration into existing IT is a natural and necessary step,” said Kaj Arnö, chief executive officer of MariaDB Foundation, in a press release from OPEA.
What did OPEA create?
The goal is to unlock new AI use cases, particularly higher up the technology stack, through an open, collaborative governance model. To that end, OPEA created a framework of composable building blocks for generative AI systems, covering everything from training and data storage to prompts. OPEA also created an assessment for grading generative AI systems on performance, features, trustworthiness and enterprise-grade readiness, along with blueprints for RAG component stacks and workflows.
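OPEA's actual framework lives in its own repositories, but a rough sketch can illustrate what "composable building blocks" means in practice: each stage of a generative AI pipeline sits behind a common interface and can be swapped independently. Every name below (RAGPipeline, the toy lambdas) is hypothetical, not an OPEA API.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RAGPipeline:
    """Chains independently swappable stages into one query path."""
    embed: Callable[[str], List[float]]           # text -> vector
    retrieve: Callable[[List[float]], List[str]]  # vector -> documents
    generate: Callable[[str], str]                # prompt -> answer

    def run(self, question: str) -> str:
        docs = self.retrieve(self.embed(question))
        context = "\n".join(docs)
        prompt = f"Context:\n{context}\n\nQuestion: {question}"
        return self.generate(prompt)


# Toy stand-ins; a real deployment would plug in an embedding model, a vector
# database (e.g., Qdrant or Milvus) and an LLM behind the same interfaces.
pipeline = RAGPipeline(
    embed=lambda text: [float(len(text))],
    retrieve=lambda vec: ["OPEA is a sandbox project under LF AI & Data."],
    generate=lambda prompt: "(model output grounded in: " + prompt[:50] + "...)",
)
print(pipeline.run("What is OPEA?"))
```

The design point is that interoperability falls out of the interfaces: any vendor's retriever or model can replace another's without touching the rest of the pipeline.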
Intel, in particular, will provide the following:
- A technical conceptual framework.
- Reference implementations for deploying generative AI on Intel Xeon processors and Intel Gaudi AI accelerators (a generic sketch of that inference layer follows this list).
- More infrastructure capacity in the Intel Tiber Developer Cloud for ecosystem development, AI acceleration and validation of RAG and future pipelines.
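As a generic stand-in for the inference layer those reference implementations target, the snippet below runs an off-the-shelf Hugging Face text-generation pipeline on CPU. The model choice and prompt are placeholders; this is not Intel's actual reference code, which would add hardware-specific optimizations for Xeon or Gaudi.

```python
from transformers import pipeline

# device=-1 pins inference to CPU; Intel-optimized or Gaudi back ends would
# slot in at this same layer in a real deployment.
generator = pipeline("text-generation", model="gpt2", device=-1)
result = generator("Enterprise RAG pipelines need", max_new_tokens=30)
print(result[0]["generated_text"])
```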
“Advocating for a foundation of open source and standards – from datasets to formats to APIs and models – enables organizations and enterprises to build transparently,” said A. B. Periasamy, chief executive officer and co-founder of MinIO, in a press release from OPEA. “The AI data infrastructure must also be built on these open principles.”
Why is RAG so important?
Retrieval-augmented generation, in which a generative AI model consults real-world company or public data before answering, is proving valuable in enterprise generative AI. RAG helps companies trust that generative AI won’t spit out convincing-sounding nonsense. OPEA hopes RAG could let generative AI pull more value from the data repositories companies already have.
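To make the grounding idea concrete, here is a toy sketch of the RAG decision: answer only from retrieved documents, and decline when nothing relevant is found. The corpus, keyword matching and refusal message are illustrative placeholders; production systems use vector search and an LLM.

```python
CORPUS = {
    "refunds": "Refunds are processed within 14 days of the return request.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def retrieve(question: str) -> str | None:
    # Real systems use vector similarity; keyword overlap stands in here.
    hits = [text for topic, text in CORPUS.items() if topic in question.lower()]
    return hits[0] if hits else None

def answer(question: str) -> str:
    context = retrieve(question)
    if context is None:
        # No grounding available: decline rather than let the model guess.
        return "No supporting document found; declining to answer."
    # In a real pipeline the passage is prepended to the LLM prompt as context.
    return f"Per company documents: {context}"

print(answer("How long do refunds take?"))     # grounded answer
print(answer("What is the CEO's shoe size?"))  # refusal
```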
“We’re thrilled to welcome OPEA to LF AI & Data with the promise to offer open source, standardized, modular and heterogeneous Retrieval-Augmented Generation (RAG) pipelines for enterprises with a focus on open model development, hardened and optimized support of various compilers and toolchains,” said LF AI & Data Executive Director Ibrahim Haddad in a press release.
There are no de facto standards for deploying RAG, Intel pointed out in its announcement post; OPEA aims to fill that gap.
SEE: We named RAG one of the top AI trends of 2024.
“We are seeing tremendous enthusiasm among our customer base for RAG,” said Chris Wolf, global head of AI and advanced services at Broadcom, in a press release from OPEA.
“The constructs behind RAG can be universally applied to a variety of use cases, making a community-driven approach that drives consistency and interoperability for RAG applications an important step forward in helping all organizations to safely embrace the many benefits that AI has to offer,” Wolf added.
How can organizations participate in OPEA?
Organizations can get involved by contributing on GitHub or contacting OPEA.