How Generative AI can both harm and heal cybersecurity
OpenAI’s latest release of its famed generative AI (GenAI) platform, GPT-4o, is smarter than ever. But as we marvel at GPT-4o, hackers are probably busy finding ways to put it to nefarious use. In fact, researchers working with its previous release, GPT-4, found that it could exploit 87 percent of the one-day vulnerabilities they tested it against.
One-day vulnerabilities are those for which a fix is available but sysadmins haven’t yet applied it, leaving machines exposed. Unsurprisingly, exploiting such vulnerabilities is one of the most popular ways for hackers to break into computers.
Worryingly, the study shows that not only can GPT-4 exploit such vulnerabilities, it can do so autonomously. While this use of GenAI as an attack vector hasn’t yet been reported in the wild, GenAI has already become a headache for cybersecurity professionals.
GenAI-powered cyberattacks
Sharef Hlal, head of Group-IB’s digital risk protection analytics team for the Middle East and Africa, says cybercriminals have already weaponized GenAI. “Generative AI, while a remarkable tool, carries a dual nature in the realm of cybersecurity,” says Hlal.
Mike Isbitski, director of cybersecurity strategy at Sysdig, agrees. “From a security point of view, (GenAI) is absolutely more of a nuisance — threat actors only need to find a single vulnerability to access an environment where they can then quickly move laterally with the help of GenAI,” says Isbitski.
He explains that much of the cloud landscape is homogeneous, built on similar public images and infrastructure. It is this homogeneity, Isbitski argues, that lets attackers automate much of their work, from reconnaissance to executing the attack itself.
Meanwhile, Hlal says scammers are also leveraging AI advancements to refine their deceitful schemes. This, he says, is evidenced by the number of compromised ChatGPT credentials flooding the dark web. “The staggering increase in compromised hosts accessing ChatGPT indicates a concerning trend,” says Hlal.
Social engineering is one area where attackers are using GenAI. Isbitski says the technology helps attackers sharpen email phishing campaigns and create deepfakes used to convince victims to surrender something of value.
“Consider the recent high-profile use of AI for the fake Joe Biden robocall in New Hampshire meant to disrupt and depress voting — there is no shortage of publicly available, easy-to-use AI tools that allow even the least technical actors to dupe unsuspecting people into giving up the keys to the castle,” says Isbitski.
Unfortunately, Hlal believes the use of AI for cyberattacks will only scale up. He expects cybercriminals to refine their tactics, either making current schemes more effective or introducing entirely new ones.
Time to turn the tables
But it’s not all doom and gloom. “To the same extent that threat actors can automate their processes, security professionals can leverage GenAI to thwart them,” says Isbitski.
He says there are a handful of primary use cases where GenAI can help security professionals.
For instance, there’s system hardening, which can be accomplished through as-code approaches in modern architectures, says Isbitski. “And GenAI is adept at handling all types of code faster than humans.”
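To make that concrete, here is a minimal sketch of what a GenAI-assisted hardening review might look like. It assumes the official OpenAI Python SDK with an API key in the environment; the model name, prompt, and the deliberately weak Dockerfile snippet are illustrative placeholders, not a recommendation of any specific tool or workflow.

```python
# A minimal sketch of GenAI-assisted hardening review, assuming the
# official OpenAI Python SDK; the model name, prompt, and Dockerfile
# snippet below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A deliberately weak as-code artifact for the model to critique.
dockerfile = """
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl
USER root
EXPOSE 22
CMD ["bash"]
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable model could be used
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List concrete "
                    "hardening fixes for the configuration you are given."},
        {"role": "user", "content": dockerfile},
    ],
)

# Print the model's suggested fixes (e.g. pin the base image,
# drop root, avoid exposing SSH) for a human to review and apply.
print(response.choices[0].message.content)
```

The human stays in the loop here by design: the model proposes fixes faster than a person could audit the code, but an engineer still decides what lands in the repository.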
Similarly, GenAI can help contextualize the risk. Security vulnerabilities usually pile up faster than many security teams can address them, explains Isbitski. “GenAI is another ideal fit here, helping contextualize the actual risk based on other factors like what’s in use, what’s exposed, and what’s the criticality relative to other issues in the environment,” he says.
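As an illustration of that kind of triage, the short sketch below scores findings by runtime context rather than raw severity. The weights and fields are illustrative assumptions, not a standard scoring model; the point is simply that a medium-severity flaw in an exposed, business-critical workload can outrank a critical one sitting in dormant code.

```python
# A minimal sketch of risk contextualization along the lines Isbitski
# describes; the weights and fields are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Finding:
    cve_id: str
    base_score: float         # CVSS base score, 0-10
    package_in_use: bool      # is the vulnerable package actually loaded?
    internet_exposed: bool    # is the workload reachable from outside?
    asset_criticality: float  # 0 (low) to 1 (business-critical)


def contextual_risk(f: Finding) -> float:
    """Scale raw severity by runtime context."""
    score = f.base_score
    if not f.package_in_use:
        score *= 0.2   # vulnerable code never executes: far less urgent
    if f.internet_exposed:
        score *= 1.5   # reachable attack surface: raise the priority
    return score * (0.5 + f.asset_criticality / 2)


findings = [
    Finding("CVE-2024-0001", 9.8, package_in_use=False,
            internet_exposed=False, asset_criticality=0.3),
    Finding("CVE-2024-0002", 6.5, package_in_use=True,
            internet_exposed=True, asset_criticality=0.9),
]

# Once context is applied, the "medium" finding outranks the "critical" one.
for f in sorted(findings, key=contextual_risk, reverse=True):
    print(f.cve_id, round(contextual_risk(f), 2))
```

In practice, a GenAI assistant would gather that context (what is running, what is exposed) from the environment and explain the ranking in plain language, rather than leaving the weighting to hand-written rules.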
Hlal, too, believes AI marks a significant turning point in cybersecurity. While not a panacea, he says, it transforms defense mechanisms by augmenting human expertise. But at the end of the day, “the success of leveraging AI hinges on adept navigation by companies,” believes Hlal.
He argues that, all things considered, the debate surrounding AI’s impact on security transcends the technology itself. This, Hlal says, necessitates a holistic approach that emphasizes responsible usage and ethical implementation.
“While AI algorithms demand human intervention for civic innovation, they also mandate stringent safeguards against malicious exploitation,” says Hlal. “Thus, the focus should not solely be on the technology’s potential, but rather on how we wield it for societal betterment, ensuring it doesn’t become a tool for nefarious activities.”