
Pulumi Adds Generative AI Copilot to Manage Cloud Infrastructure


Pulumi today added a public beta of Pulumi Copilot, a tool for its cloud infrastructure management platform that uses generative artificial intelligence (AI) to automate a range of DevOps tasks.

Based on a large language model (LLM) developed by OpenAI, Pulumi Copilot is trained on more than 10,000 prompts derived from the Pulumi data model and the REST application programming interfaces (APIs) exposed by cloud service providers.

Pulumi CEO Joe Duffy said the tool as a result has a semantic understanding of more than 160 cloud computing environments, enabling it to provide relevant, contextual responses to natural language queries based on usage patterns. In effect, Pulumi Copilot functions much like an engineer who has been added to a DevOps team.

For example, Pulumi Copilot can find resources in a cloud environment that the AI assistant can then use to generate additional code or documentation, troubleshoot issues, or diagnose compliance and security errors. In the near future, rather than merely responding to prompts, these AI assistants will be able to automatically perform actions on behalf of a DevOps team, noted Duffy.
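To give a sense of what such generated code could look like, here is a hypothetical sketch of the kind of Pulumi TypeScript program an assistant might produce from a prompt such as "create a versioned S3 bucket" (the resource name and prompt are illustrative, not actual Copilot output):

```typescript
import * as aws from "@pulumi/aws";

// Hypothetical output for the prompt "create a versioned S3 bucket".
// Resource names are illustrative.
const bucket = new aws.s3.Bucket("copilot-demo-bucket", {
    versioning: { enabled: true },
});

// Export the bucket's ID so other tools and stacks can reference it.
export const bucketName = bucket.id;
```

A program like this runs inside the Pulumi engine (for example, via `pulumi up`), which reconciles the declared resources against the actual cloud environment.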

Initially, Pulumi Copilot is available for free during the public beta, but in time the company will adopt a pricing model for it similar to the one used by GitHub, said Duffy. Organizations that are Pulumi Enterprise customers, for example, will have full access.

As generative AI assistants become more widely available, the time currently required to onboard an engineer onto a DevOps team should decline substantially. Today it can take several months for a new member of a DevOps team to become effective.

Orchestrating AI Assistants

The next major challenge will be orchestrating all the AI assistants that might become part of a DevOps workflow. As each provider of a DevOps tool or platform makes AI assistants available, a need to orchestrate the assignment of tasks among them will emerge. There might, for example, eventually be a need for a primary AI agent that manages the workflows assigned to AI assistants trained to perform a narrow range of tasks.

Regardless of the approach, much of the drudgery that makes managing DevOps workflows tedious may soon be eliminated. The challenge, and the opportunity, will be restructuring DevOps teams that will soon comprise both human engineers and their various AI assistants.

In the meantime, DevOps engineers should start creating an inventory of their least favorite tasks with an eye toward eventually assigning them to an AI assistant.

It’s not likely AI assistants will replace the need for DevOps engineers any time soon. Thanks to advances in AI, it is generally expected that more software will be created and deployed in the next few years than in the past two decades. There will always be a need for software engineers to supervise application environments where the number of dependencies between applications requires all-too-human intuition to troubleshoot. The difference is that instead of a small army of software engineers, a smaller team will be able to manage applications at a level of scale that not long ago would have seemed unimaginable.


