SmartBear Applies Generative AI Across API Tool Portfolio
SmartBear has added generative artificial intelligence (AI) capabilities to its portfolio of tools for building, testing and monitoring application programming interfaces (APIs).
Dan Faulkner, chief product and technology officer for SmartBear, said SmartBear HaloAI is initially based on large language models (LLMs) from OpenAI. However, the company expects to employ multiple LLMs from different providers as use cases continue to evolve.
Rather than building LLMs of its own, SmartBear is focusing its efforts on applying the data it collects to generative AI platforms that would cost millions of dollars to replicate, he added.
GenAI Capabilities Build on Previous AI Investments
Available in beta, SmartBear HaloAI builds on the company's previous AI investments, including the recent acquisition of Reflect, a provider of a generative AI tool for automating test management. Generative AI has already been used to automate test cases without the need to write and debug the scripts previously required. One quality assurance (QA) team that needed to create 500 manual tests, averaging five minutes per test, was able to run those tests automatically in five seconds, saving 20 hours of testing per regression cycle.
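As a rough sanity check on those figures, here is a minimal back-of-the-envelope calculation in Python. The interpretation that the five-minute average applies to every test in a full manual pass is an assumption on our part, not something SmartBear specified.

```python
# Back-of-the-envelope check on the cited figures. Assumption (ours, not
# SmartBear's): the five-minute average applies to a full manual pass of
# all 500 tests.
manual_tests = 500          # manual tests the QA team needed to create
minutes_per_test = 5        # average manual execution time per test

full_manual_pass_hours = manual_tests * minutes_per_test / 60
print(f"Full manual pass: ~{full_manual_pass_hours:.1f} hours")  # ~41.7 hours

# SmartBear cites 20 hours saved per regression cycle, which suggests a
# typical cycle re-runs only part of the suite rather than all 500 tests.
```

Read either way, the bulk of the manual execution time is reclaimed once the suite runs automatically in seconds.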
The overall goal is to make it simpler for existing DevOps teams to create and test APIs at a time when there is still a shortage of IT professionals with those specific skills, noted Faulkner.
In general, SmartBear is moving toward aggregating the tools and platforms it provides into a series of solution hubs that will make it easier to build, test, release and monitor software, noted Faulkner. SmartBear HaloAI will then make it simpler to automate processes across all the elements that make up a solution, he added.
Finding a way to automate the testing of code will be especially critical because, as more developers rely on general-purpose AI tools such as ChatGPT to write code, the quality of the code being written is likely to decline, noted Faulkner. Generative AI platforms have been trained on code of varying quality from across the internet, so the quality of their output can vary widely, he noted.
The only way to mitigate that issue will be to continuously test code using platforms that generate tests from examples vetted by the provider of a test automation platform, said Faulkner. In effect, SmartBear is providing the “special sauce” needed to achieve that goal at scale, he added.
It’s still early days for the adoption of generative AI. However, it won’t be long before these tools are pervasively employed not only to generate code but also to automate DevOps workflows. The next major challenge will be orchestrating all the AI assistants optimized to perform specific tasks, such as testing APIs, that will soon be embedded into those workflows.
In the meantime, DevOps teams will need to determine to what degree they can extend their existing workflows to incorporate AI assistants versus having to replace their existing platforms. After all, the debate has now moved beyond determining what the benefits of AI will be to addressing the actual cost of making that transition.