Generative AI

Monster API uses generative AI to help anyone build generative AI


A startup called Monster API, officially known as Generative Cloud Inc., has launched MonsterGPT, a GPT-powered agent that simplifies and accelerates the fine-tuning and deployment of open-source generative artificial intelligence models.

According to the company, MonsterGPT reduces the time it takes to fine-tune and deploy generative AI applications based on open-source models such as Llama 3 and Mistral to as little as 10 minutes.

Open-source generative AI models like Llama 3 have become incredibly popular with companies looking for an alternative to the expensive, proprietary models created by OpenAI and Google LLC. However, the process of creating an application such as a conversational assistant is still extremely tricky.

For instance, if a company is looking to build a customer service bot that can deal with common problems, the open-source model it uses must be fine-tuned on the company’s own data and knowledge bases, so it can learn about the various products, services and policies needed to inform its responses.

Doing this is not easy and is typically the preserve of developers and data scientists, who must adjust as many as 30 variables. The work requires both knowledge of sophisticated AI optimization frameworks and an understanding of the underlying infrastructure, such as graphics processing unit-based cloud setups, containerization and Kubernetes.

Monster API says fine-tuning AI models often becomes a major undertaking for companies, involving as many as 10 specialized engineers and dozens of hours of work. That’s assuming the company actually has employees with the required skill sets. There are many that do not.

Using generative AI to create generative AI

It’s this process that Monster API is trying to solve. And it has decided that the best way to help fine-tune and customize generative AI models is to turn to generative AI itself. Using its API, developers can simply issue a command such as “fine-tune Llama 3” and MonsterGPT will get to work.

“For the first time, we’re offering a solution based on an agent-driven approach, for Generative AI,” Monster API’s chief executive officer Saurabh Vij told SiliconANGLE. “The ease and speed of this process is like flying in a Mach 4 supersonic jet from New York to London in 90 minutes. At the end of this blazing fast process, MonsterGPT provides developers with an API endpoint for their custom fine-tuned models.”

Compared to traditional AI development, Monster API enables a more “use-case”-oriented approach, where users can simply specify the task they want their model to achieve, such as sentiment analysis or code generation, and it will create the most optimal model to perform that task.
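To make the contrast concrete, here is a minimal sketch of what such a “use-case”-oriented request might look like. The field names and function below are hypothetical illustrations, not Monster API’s actual schema: the point is that the user states only the task and a budget, and the agent is left to choose hyperparameters and infrastructure.

```python
import json

def build_finetune_request(base_model: str, task: str, budget_usd: float) -> str:
    """Assemble a hypothetical use-case-oriented fine-tuning request.

    The user declares *what* they want (task, budget); the agent decides
    *how* (hyperparameters, GPU selection). All field names are illustrative.
    """
    payload = {
        "instruction": f"fine-tune {base_model}",
        "task": task,               # e.g. "sentiment-analysis" or "code-generation"
        "budget_usd": budget_usd,   # the agent picks hardware within this cap
    }
    return json.dumps(payload)

request_body = build_finetune_request("Llama 3", "sentiment-analysis", 25.0)
print(request_body)
```

The design choice worth noting is what is absent: no learning rate, no LoRA rank, no GPU type. Those are exactly the roughly 30 variables the agent is meant to absorb.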

According to Vij, there’s going to be big interest in this offering, as the majority of small teams, startups and indie developers are not particularly well-versed in the art of fine-tuning and deploying AI models. “Most developers are not experts in the deeper nuances of how different models and architecture works, and they don’t have experience in dealing with complicated cloud and GPU infrastructure,” he said.

Vij said he envisages a future world in which just about everybody can become a programmer. That’s because they won’t need any coding expertise, as they will simply be able to command generative AI to create the code for them, using their natural language. “All of our research and design is driven to accelerate toward this future faster,” he added.

Open source vs closed-source AI

Monster API believes its API-based approach to generative AI development mirrors historic advances in technology, such as the first Macintosh computer, which introduced the concept of personal computing in the 1980s, and Mosaic, the first easy-to-use web browser that democratized the internet by making it accessible to anyone.

The company wants to democratize access to generative AI development in the same way, and to do that, it’s necessary to focus on open-source models rather than their closed-source alternatives.

Vij said the rivalry between open- and closed-source AI has its own historical precedent in the shape of the battle for mobile dominance that emerged between Apple Inc. and Google LLC’s open-source Android last decade. “Just as Android offers a flexible alternative to Apple’s tightly controlled ecosystem, there’s a concerted effort to enhance open-source AI models as a rival to proprietary giants like OpenAI’s GPT-4,” he said.

Vij believes Monster API will aid the open-source AI cause by making it much easier for people without specialized skills to build AI applications. These users will almost certainly opt for open-source models rather than proprietary alternatives, he said, because proprietary versions such as GPT-4 tend to be generalized rather than specialized.

He explained that most businesses require domain-specific AI models, which means the base model has to be fine-tuned. But the fine-tuning process for closed-source models is often restricted to just a few techniques offered by the vendor itself, and it can be extremely expensive too, he said.

“With the open-source world, developers get the freedom to try multiple frameworks from Q-LORA to DPO,” Vij added. “They can experiment with different techniques and choose the best suited for their use case and budget.”
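The freedom Vij describes amounts to swapping recipes over the same base model. Below is a sketch of two such recipes: typical QLoRA settings (4-bit quantization plus low-rank adapters) drawn from commonly cited defaults in the QLoRA literature, alongside a DPO-style preference-tuning configuration. The specific values are illustrative assumptions, not Monster API settings.

```python
# Two alternative fine-tuning recipes for the same open-source base model.
# Values are commonly used defaults, shown here only to illustrate the knobs.

qlora_recipe = {
    "method": "qlora",
    "load_in_4bit": True,             # quantize the frozen base model to 4 bits
    "bnb_4bit_quant_type": "nf4",     # NormalFloat4, the data type QLoRA introduced
    "lora_r": 16,                     # rank of the trainable low-rank adapters
    "lora_alpha": 32,                 # adapter scaling factor
    "target_modules": ["q_proj", "v_proj"],  # attention projections to adapt
}

dpo_recipe = {
    "method": "dpo",
    "beta": 0.1,                      # strength of the preference-loss constraint
    "dataset": "pairwise-preferences",  # chosen-vs-rejected response pairs
}

# A developer can pick whichever recipe suits the use case and budget.
chosen = qlora_recipe
print(chosen["method"])
```

With a closed-source model none of these knobs are exposed; the vendor’s hosted fine-tuning API is the only option.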

Monster API’s agent is said to leverage powerful AI frameworks such as QLoRA to enable fine-tuning, and vLLM for deploying customized models. It provides a simple, unified interface that encompasses the entire development lifecycle, from initial fine-tuning to deployment. “MonsterGPT reduces the time and friction for launching fine-tuning experiments and deployments,” Vij said. “Users also get more optionality around the optimization algorithms they can use to significantly improve their throughput and latency.”
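Once a fine-tuned model is deployed, the developer is handed an API endpoint. vLLM can serve models behind an OpenAI-compatible HTTP API, so a call to such an endpoint plausibly looks like the sketch below. The endpoint URL, API key and model name are placeholders, not real Monster API values, and the request is only constructed here, not sent.

```python
import json
from urllib import request as urlrequest

def completion_request(endpoint: str, api_key: str, prompt: str) -> urlrequest.Request:
    """Build (but do not send) a chat-completion call to a deployed model.

    Assumes the deployment exposes the OpenAI-compatible /v1/chat/completions
    route that vLLM's server provides; all identifiers are placeholders.
    """
    body = json.dumps({
        "model": "my-finetuned-llama3",  # hypothetical deployed model id
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urlrequest.Request(
        endpoint + "/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = completion_request("https://example-endpoint.invalid", "sk-demo", "Hello")
print(req.full_url)
```

Because the shape matches the OpenAI API, existing client code can often point at the new endpoint with little more than a base-URL change.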

The other advantage of MonsterGPT is that users don’t need to learn about the multitude of possible cloud infrastructure setups, GPU resources and configurations available. That’s because it will automatically select the most appropriate infrastructure, based on the user’s budget and goals.

Vij promised that this will be particularly beneficial, because for many use cases and applications, a smaller, fine-tuned model is more than sufficient. For instance, if a company uses OpenAI’s most powerful model, GPT-4 Turbo, for a simple customer service chatbot, that’s most likely overkill, since its capabilities far exceed that sort of use case. By using a smaller model optimized for that application, companies will enjoy substantially lower costs.

“Smaller models can fit inside smaller and more affordable commodity GPUs,” Vij explained, adding that this is another advantage open-source has. “With closed-source models, developers can’t do anything — they’re forced to use much bigger, more generalized models.”

Holger Mueller of Constellation Research Inc. said he’s impressed with the concept of MonsterGPT, which represents a major advance in the recursive use of generative AI. “Similar to how we already use robots to build more robots, we’re now using AI software to write more AI software,” he said.

The analyst pointed out that it’s important that humans remain in the loop when it comes to such use cases, since there needs to be some kind of oversight. But the technology is extremely promising for its ability to lower the barrier to entry for generative AI development, he said.

“Monster API aims to help anyone who can speak and validate an AI model to successfully build and deploy one, without special skills,” Mueller explained. “As with all new technology, and especially in AI, we need to apply a healthy level of skepticism until we can see working use cases in production. But for now Monster API deserves kudos for its efforts to democratize AI development and change the future of work for developers and business users with no AI experience.”

At launch, Monster API supports more than 30 popular open-source LLMs, including Meta Platforms Inc.’s Llama 3, Mistral AI’s Mistral-7B-Instruct-v0.2, Microsoft Corp.’s Phi-2 and Stability AI Ltd.’s SDXL Base.

Image: Microsoft Designer



