Tricentis Taps Generative AI to Automate Application Testing

Tricentis this week added a generative artificial intelligence (AI) capability, dubbed Tricentis Copilot, to its application testing automation platform to reduce the amount of code that DevOps teams need to manually create.

Based on an instance of a large language model (LLM) created by OpenAI and deployed on the Microsoft Azure cloud, the first of what will eventually become multiple AI assistants is Tricentis Testim Copilot, which makes it possible to describe a test in natural language and have it automatically generated in JavaScript. Additional Tricentis Copilot solutions for the Tricentis Tosca and Tricentis qTest platforms will be added later this year.
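To illustrate the concept, a prompt such as “verify that a user can sign in with valid credentials” might yield a JavaScript test along these lines. This is a hypothetical sketch written against the open source Playwright framework, not Tricentis’s actual output, and the URL and element selectors are assumptions made for the example:

    // Hypothetical illustration of a test generated from a natural-language prompt.
    // The site URL and element selectors are placeholders, not real values.
    const { test, expect } = require('@playwright/test');

    test('user can sign in with valid credentials', async ({ page }) => {
      await page.goto('https://example.com/login');  // open the login page
      await page.fill('#username', 'demo-user');     // enter the username
      await page.fill('#password', 'demo-password'); // enter the password
      await page.click('button[type="submit"]');     // submit the form
      await expect(page).toHaveURL(/dashboard/);     // confirm the redirect to the dashboard
    });

The point of such a workflow is that the plain-language description, rather than the JavaScript itself, becomes the artifact a team member needs to produce.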

Mav Turner, chief product and strategy officer for Tricentis, said the user interfaces the company has created for each platform will also continue to evolve to make it simpler to invoke the appropriate prompts within any workflow.

The goal is to make it easier to create tests, because the more tests that are run, the more application quality improves, said Turner. For example, organizations using the beta editions of Tricentis Copilot have already seen a 20% to 50% increase in the number of tests being created, while at the same time reducing test failure rates by 16% to 43%, he added.

There’s no doubt that generative AI makes it substantially easier to create tests in ways that should enable DevOps teams to run more tests across the entire software development life cycle (SDLC). Rather than always having to wait for dedicated testing teams to write the code required, it will be increasingly feasible for any member of a DevOps team to generate a test, including application developers, who should be testing code as it is iteratively created.

Generative AI should also make it easier for DevOps teams to reuse tests once they are created and, using the summarization capabilities these platforms enable, to understand how those tests are being run. The same platforms will also be able to recommend improvements to testing code. Collectively, those capabilities should make it possible to run tests faster, with fewer errors, at lower cost and with higher productivity.

Those capabilities can all be provided in a way that doesn’t result in any of the testing code being used to train later updates to the core LLM provided by OpenAI, noted Turner. Longer term, Tricentis will continue to research which LLMs are best suited to a given use case, in a way that remains transparent to DevOps teams that may not care which LLM is being employed to automate a task. For now, however, it doesn’t make much economic sense for Tricentis to build its own LLM, said Turner.

It’s not yet clear to what degree generative AI will democratize application testing, but as tests become easier to create, there should be more time to run more complex ones that might, for example, address cybersecurity issues. Most of the tests run today surface common programming mistakes. However, as testing becomes faster in the age of AI, there should be more time to run a wider range of tests to ensure the best possible application experience.


