Code faster with generative AI, but beware the risks when you do
Developers now can turn to generative artificial intelligence (GenAI) to code faster and more efficiently, but they should do so with caution and no less attention than before.
While the use of AI in software development may not be new — it’s been around since at least 2019 — GenAI brings significant improvements in the generation of natural language, images and — more recently — videos and other assets, including code, said Diego Lo Giudice, Forrester vice president and principal analyst.
Also: Why the future must be BYO AI: Model lock-in deters users and stifles innovation
Previous iterations of AI were used mostly in code testing, with machine learning leveraged to optimize test-strategy models, Giudice told ZDNET. GenAI goes further: applied across these use cases, it can support decision-making and improve code generation.
According to Giudice, GenAI offers access to an expert peer programmer or specialist (such as a tester or business analyst) throughout the development lifecycle that can be queried interactively to find information quickly. GenAI also can suggest solutions and test cases.
“For the first time, we are seeing significant productivity gains that traditional AI and other technologies have not provided us with,” he said.
AI can be tapped across the entire software development lifecycle, with a dedicated “TuringBot” at each stage to enhance tech stacks and platforms, he noted.
TuringBots, a term coined by Forrester, are defined as AI-powered software that helps developers build, test, and deploy code. The research firm believes TuringBots will drive a new generation of software development, assisting at every stage of the development lifecycle, including looking up technical documentation and auto-completing code.
“Analyze/plan TuringBots,” for instance, can facilitate the analysis and planning phase of software development, Giudice said, pointing to OpenAI’s ChatGPT and Atlassian Intelligence as examples of such AI products. Others, such as Google Cloud’s Gemini Advanced, can generate designs of microservices and APIs along with their code implementation, while Microsoft Sketch2Code can generate working code from hand-drawn UI sketches, he said.
Also: Implementing AI into software engineering? Here’s everything you need to know
He added that “coder TuringBots” currently are the most popular use case for GenAI in software development, where they can generate code from prompts as well as from code context and comments via autocompletion in popular IDEs (integrated development environments). These tools support common languages such as JavaScript, C++, Python, and Rust.
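As a minimal sketch of that comment-driven autocompletion workflow (hypothetical, not tied to any specific TuringBot product): a developer writes only a signature and docstring as the “prompt,” and the assistant proposes the body, which the developer then reviews.

```python
# The developer supplies the signature and docstring as a prompt...
def dedupe_preserve_order(items):
    """Return a list with duplicates removed, keeping first-seen order."""
    # ...and a coding assistant might auto-complete a body like this,
    # which the developer still has to review before accepting:
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_preserve_order([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```

The value lies in the round trip: the prompt is ordinary code, and the suggestion arrives in place, inside the IDE.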
A big draw of generative models is that they can write code in many languages, allowing developers to input a prompt and generate, refactor, or debug lines of code, said Michael Bachman, Boomi’s head of architecture and AI strategy. “Essentially all humans interacting with GenAI are quasi and senior developers,” he said.
The software vendor integrates GenAI into some of its products, including Boomi AI, which translates natural language requests into action. It can be used to design integration processes, APIs, and data models to connect applications, data, and processes, according to Boomi.
The company uses GenAI to support its own software developers, who keep a close watch on the code that runs its platform.
Also: Can AI be a team player in collaborative software development?
“And that is the key,” Bachman said. “If you are using GenAI as the primary source for building your whole application, you are probably going to be disappointed. Good developers use GenAI as a jumping-off point or to test failure scenarios thoroughly, before putting code into production. This is how we deal with that internally.”
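That practice of probing failure scenarios before production can be sketched as follows. The helper and its inputs are hypothetical, not Boomi’s code; the point is that the developer writes tests for the edge cases the model may not have considered, rather than trusting the happy path.

```python
# Hypothetical AI-suggested helper: parse a price string into integer cents.
def parse_price_cents(text):
    dollars, _, cents = text.strip().lstrip("$").partition(".")
    return int(dollars) * 100 + (int(cents.ljust(2, "0")[:2]) if cents else 0)

# Before shipping, the developer tests failure scenarios, not just the
# obvious case the prompt described:
assert parse_price_cents("$4.50") == 450
assert parse_price_cents("4.5") == 450       # single-digit cents
assert parse_price_cents("$12") == 1200      # no decimal point at all
try:
    parse_price_cents("free")                # malformed input should fail loudly
except ValueError:
    pass
```

If any of these probes surfaces a bug, the generated code goes back for another iteration instead of into production.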
His team also works to build features to meet their customers’ “practical AI objectives.” For example, Boomi is creating a retrieval system because many of its clients want to replace keyword searches with the ability to look up content, such as catalogs on their websites, in a natural language.
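The idea behind such a retrieval system can be illustrated with a toy example. This is not Boomi’s implementation (production systems typically use learned embeddings rather than word counts); it only shows how a free-text query can be matched against catalog entries by similarity instead of exact keywords.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words vector; real systems would use learned embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

catalog = [
    "red running shoes size 10",
    "wireless noise cancelling headphones",
    "stainless steel water bottle 1 litre",
]

def search(query):
    # Return the catalog entry most similar to the natural-language query.
    q = vectorize(query)
    return max(catalog, key=lambda doc: cosine(q, vectorize(doc)))

print(search("headphones with noise cancelling"))
```

The query shares no exact phrasing with the catalog entry, yet similarity scoring still surfaces the right item, which is the behavior keyword search cannot provide.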
GenAI also can be used to remediate security issues, Giudice said, where it can look for vulnerabilities in AI-generated code and offer suggestions to help developers fix them.
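One common pattern such tools flag is user input concatenated into SQL. The before/after below is a hypothetical illustration of the kind of fix a remediation assistant might suggest, shown with Python’s standard-library sqlite3.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Vulnerable pattern a scanner would flag: input interpolated into SQL,
    # so a crafted name can alter the query (SQL injection).
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Suggested remediation: a parameterized query, so the input is bound
    # as data and never becomes part of the SQL text.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_safe("alice"))  # [('alice',)]
```

The suggestion still needs a developer’s review, which is exactly the expert-eye check the analysts describe.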
Compared to traditional coding, a no- or low-code development strategy can offer speed, built-in quality, and adaptability, said John Bratincevic, principal analyst at Forrester.
Also: Beyond programming: AI spawns a new generation of job roles
It also provides for an integrated software development lifecycle toolchain and access to an expanded talent pool that includes non-coders and “citizen developers” outside the IT community, Bratincevic said.
Organizations may face challenges, however, related to the governance of large-scale implementation, especially with managing citizen developers who can number in the thousands, he cautioned. Pricing also can pose a barrier as it is typically based on the number of end users and this can limit adoption, he said.
And while GenAI or AI-infused software assistants can enable junior professionals to fill talent gaps, including in cybersecurity, Giudice said review by an expert eye is still necessary for all these tasks.
Bratincevic concurred, stressing the need for developers and other roles within the software development lifecycle to review everything the platform generates or auto-configures through AI.
“We are not yet, and probably won’t ever be, at the point of trusting AI blindly for software development,” he said.
For one, there are security requirements to consider, according to Scott Shaw, Asia-Pacific CTO for Thoughtworks. The tech consultancy regularly tests new tools to improve its efficiency, whether it is in the IDE or to support how developers work. The company does so where it is appropriate for its customers and only with their consent, Shaw said in a video interview, noting that some businesses are still nervous about the use of GenAI.
Also: Hurtling toward generative AI adoption? Why skepticism is your best protection
“Our experience is that [GenAI-powered] software coding tools [currently] aren’t as security-aware and [attuned with] security coding practices,” he said. For instance, developers who work for organizations in a regulated or data-sensitive environment may have to adhere to additional security practices and controls as part of their software delivery processes.
Using a coding assistant can double productivity, but developers need to ask if they can adequately test the code and fulfill the quality requirements along the pipeline, he noted.
It is a double-edged sword: Organizations must look at how GenAI can augment their coding practices so the products they develop are more secure, and — at the same time — how the AI brings added security risks with new attack vectors and vulnerabilities.
Because it delivers significant scale, GenAI amplifies everything an organization does, including the associated risks, Shaw noted. A lot more code can be generated with it, which also means the number of potential risks will increase exponentially.
Know your AI models
And while low-code platforms may be a good foundation for GenAI TuringBots to aid software development, Bratincevic noted that organizations need to know which large language models (LLMs) are used and ensure these align with their corporate policies.
He said GenAI players “vary wildly” in this aspect, and urged businesses to check the version and licensing agreement if they use public LLMs such as OpenAI’s ChatGPT.
Also: Yikes! Microsoft Copilot failed every single one of my coding tests
He added that GenAI-powered features for generating code or component configurations from natural language have yet to mature. They may see increased adoption among citizen developers but are unlikely to impress professional developers.
Bratincevic said: “At the moment, a proven and well-integrated low-code platform plus GenAI is a more sensible approach than an unproven or lightweight platform that talks a good game on AI.”
While the LLMs carry out the heavy lifting of code writing, the human still needs to know what is required and provide the relevant context, expertise, and debugging to ensure the output is accurate, Bachman said.
Developers also need to be mindful of sharing proprietary data and intellectual property (IP), particularly with open-source tools, he said. They should avoid inputting private IP, such as code and financial figures, to ensure they are not training their GenAI models on another organization’s IP or vice versa. “And if you choose to use an open-source LLM, make sure it is well-tested before putting it into production,” he added.
Also: GitHub releases an AI-powered tool aiming for a ‘radically new way of building software’
“I would err on the side of being extremely circumspect about the models that GenAI tools are trained on. If you want those models to be valuable, you have to set up proper pipelines. If you do not do that, GenAI could cause a lot more problems,” he cautioned.
It is early days and the technology continues to evolve; its impact on how roles, including that of software developers, will change is far from certain.
For example, AI-powered coding assistants may change how skills are valued. Will developers be deemed better because they are more experienced, or because they can remember all the coding sequences? Shaw quipped.
For now, he believes the biggest potential is GenAI’s ability to summarize information, offering a good knowledge base for developers to better understand the business. They then can translate that knowledge into specific instructions, so systems can execute the tasks and build the products and features customers want.