N.A. Developers Optimistic About Generative AI and Code Security
Developers in North America are more likely than their counterparts in other regions to see generative AI as a tool that can improve the security of the code they’re writing, according to a report by market research firm Evans Data Corp.
The company’s most recent Global Development Survey found that 37.6% of programmers from North America said they expect the emerging technology will improve code security, a higher percentage than 31.3% of developers in South America. In the Europe, Middle East, and Africa, that figure came in a 30.7%; for the Asia-Pacific region, it was 30.1%
The rapid innovation and adoption of generative AI among enterprises, smaller companies, and consumers has come with almost as many worries about security and privacy as it has with the promise of vast benefits in everything from how businesses run to how people interact with the various devices in their homes.
The story for developers is no different. There are myriad generative AI products aimed at developers, from OpenAI’s ChatGPT chatbot – which helped kick off the land rush that is generative AI when it was released in November 2022 – and GitHub Copilot to Google’s PaLM 2, Cohere Generate, and Anthropic’s Claude, all of which can be used to help generate code.
“Developers have relied on machine intelligence for years: automation, code completion, low-code development, static code analysis, and the like,” cybersecurity company Trend Micro write. “But generative AI marks a distinct and major leap forward.”
Faster, More Efficient Development
For developers and their organizations, the technology can improve the efficiency and productivity of development workflows by automating coding tasks and providing real-time code suggestions, accelerates the time to market for products, and saves money. Generative AI also allows for natural language interfaces in development tools, improve code by identifying redundancy or inefficiency, and enhance documentation, according to IBM.
Global consultancy McKinsey and Company found in a study that developers can complete coding tasks twice as fast with generation AI.
Code Security Benefits
Such benefits can also include improving security in the development process.
“Generative AI can enhance code security by analyzing vast datasets to identify vulnerabilities and suggest patches,” Evans Data said in a statement. “It learns from historical security breaches to predict potential threats, automatically generates secure coding patterns, and provides real-time feedback to developers, significantly reducing the risk of security flaws in software applications.”
In addition, the technology can be used to automatically create test cases, which can help developers more quickly identify potential issues earlier in the development process, according to IBM. In addition, “by analyzing large codebases, generative AI can assist software development teams in identifying and even automatically fixing bugs. This can lead to more robust and reliable software, as well as faster development cycles.”
Nvidia, whose GPUs are cornerstones in many of the large language models (LLMs) that underpin generative AI tools, notes that the technology can create synthetic data to simulate previously unseen attack patterns, run safe simulated attacks to test defenses, and analyze vulnerabilities, a time-consuming task when done only by developers.
“An LLM focused on vulnerability analysis can help prioritize which patches a company should implement first,” the company wrote. “It’s a particularly powerful security assistant because it reads all the software libraries a company uses as well as its policies on the features and APIs it supports.”
There are Dangers
All that said, there are dangers and risks lurking in generative AI that developers need to keep in mind. The open-source OWASP organization in a report detail 10 types of vulnerabilities for AI apps built with LLMs, ranging from data leakage and prompt injections to insecure plug-ins, supply-chain risks, and misinformation or “hallucinations.”
Jacob Schmitt, senior technical content marketing manager for continuous integration and continuous development (CI/CD) platform maker CircleCI, noted the benefits that come with using generative “in the right way,” “the technology poses inherent risks that demand careful consideration and ongoing vigilance against the potential for introducing errors, security vulnerabilities, compliance issues, and ethical concerns into your code base.”
Schmitt noted such issues as poor or inefficient code quality that doesn’t meet a company’s standards and a lack of visibility into the generative AI-generated code that even it works, it might be difficult to understand the logic. Given that many LLMs are trained on both public and proprietary code, software created with generative AI may violate copyright laws or leak proprietary or sensitive information, violating regulations around data privacy and security.
In addition, AI models trained on large code repositories could including exploitable patterns or known vulnerabilities that could inadvertently find their way into a developer’s work. Schmitt also warned about increasing an organization technical debt, “the cumulative consequences of suboptimal design choices, shortcuts, or compromises made during development.”
“Accumulated technical debt can lead to decreased code maintainability, increased development time for future enhancements or bug fixes, and higher costs in the long run,” he wrote. “Crucially, the extent of the technical debt you are likely to accrue depends on how you deploy and integrate generative AI into your development workflow.”
Recent Articles By Author