Generative AI

AI-generated code potentially used in new Rhadamanthys campaign


AI-generated code may have been used in a recent campaign to spread the Rhadamanthys infostealer, researchers revealed Wednesday.

A PowerShell script used to decode the base64-encoded stealer and execute it in memory was found to contain unusually detailed comments, a potential sign that the code was generated with a large language model (LLM) such as OpenAI’s ChatGPT, Google’s Gemini or Microsoft’s Copilot, the Proofpoint researchers wrote in a blog post.

“Specifically, the PowerShell script included a pound sign followed by grammatically correct and hyper specific comments above each component of the script,” the blog stated.
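The TA547 script itself is not reproduced here, but the commenting style Proofpoint describes is easy to illustrate. The harmless PowerShell sketch below performs a simple base64 decode, the same basic operation the dropper used, with a grammatically complete, hyper-specific comment above each component; the encoded string and variable names are invented for this example.

```powershell
# Define a base64-encoded sample string to be decoded (placeholder data, not the TA547 payload)
$encodedText = "SGVsbG8sIFdvcmxkIQ=="

# Convert the base64-encoded string into an array of raw bytes
$decodedBytes = [System.Convert]::FromBase64String($encodedText)

# Re-encode the raw bytes as a UTF-8 string so the result can be displayed
$decodedText = [System.Text.Encoding]::UTF8.GetString($decodedBytes)

# Write the decoded string to the console for verification
Write-Output $decodedText
```

A human author would more typically leave such routine steps uncommented or use terse notes; it is the density and uniform grammar of the comments, not any single line, that Proofpoint flagged as a potential LLM tell.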

The script and accompanying Rhadamanthys payload were found to be part of a malicious email campaign by a threat actor and suspected initial access broker (IAB) known as TA547. The campaign targeted dozens of German businesses with emails impersonating a retail company called Metro.

The phishing emails contained a .LNK file attachment disguised as an invoice that, when executed, triggered PowerShell to run the remote script that deployed Rhadamanthys. Rhadamanthys is an information stealer first discovered in August 2022 that has been used in various phishing campaigns, but this was the first time TA547 had used it, according to Proofpoint.

While the PowerShell script that dropped Rhadamanthys in this campaign is suspected to be AI-generated, the stealer payload itself was unchanged, the researchers noted.

SC Media tested the PowerShell script generation capabilities of ChatGPT (GPT-4), Gemini (free version) and Copilot, and found that all three LLMs showed a similar pattern of including detailed comments for each section of generated code.

We also submitted the code example published in a screenshot by Proofpoint to two popular AI content detectors, Copyleaks and GPTZero, both of which predicted the example was mostly human-created. The ChatGPT-, Gemini- and Copilot-generated PowerShell examples were also submitted to these tools, which predicted they were mostly AI-generated.

[Screenshot: GPTZero results for the TA547 PowerShell sample.]
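For readers who want to run similar checks programmatically, GPTZero offers a public detection API. The PowerShell sketch below shows how a script could be submitted to it; the endpoint path, request shape and response handling reflect our reading of GPTZero's API documentation and should be treated as assumptions to verify, and the API key and file name are placeholders.

```powershell
# Read the PowerShell sample to be checked into a single string (file name is a placeholder)
$sampleCode = Get-Content -Raw -Path ".\sample.ps1"

# Build the JSON request body; GPTZero's text prediction endpoint expects a "document" field
$requestBody = @{ document = $sampleCode } | ConvertTo-Json

# Submit the sample for classification (endpoint and header name per GPTZero's docs; key is a placeholder)
$response = Invoke-RestMethod -Method Post `
    -Uri "https://api.gptzero.me/v2/predict/text" `
    -Headers @{ "x-api-key" = "YOUR_API_KEY" } `
    -ContentType "application/json" `
    -Body $requestBody

# Inspect the returned per-document classification details
$response.documents | Format-List
```

Copyleaks offers a comparable API; in both cases the verdicts are probabilistic and, as our mixed results show, should not be treated as conclusive attribution.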

“While it is difficult to confirm whether malicious content is created via LLMs — from malware scripts to social-engineering lures — there are characteristics of such content that point to machine-generated rather than human-generated information. Regardless of whether it is human- or machine-generated, the defense against these threats remains the same,” the Proofpoint researchers wrote.

The use of LLMs by cyber threat actors continues to evolve, with early examples being jailbreak prompts and cybercrime-specific LLMs like WormGPT and FraudGPT.

Phishing campaigns using AI-generated text for social engineering have already been observed in the wild, but there’s also evidence that threat actors are using LLM tools for a range of other tasks.

Microsoft researchers revealed in February that five nation-state threat actors from Russia, North Korea, Iran and China had used ChatGPT for scripting, target reconnaissance, translations and more.

Deepfakes are another form of generative AI content in active use by cybercriminals. These audio and video imitations have been used to target businesses, including a multinational Hong Kong-based company that lost $25 million after an employee was fooled by a deepfaked conference call imitating several colleagues.

Password management company LastPass was also the target of a recent deepfake audio scam over WhatsApp, which failed to trick the targeted employee. The company subsequently published a blog post about the incident to educate others about the threat.


