Generative AI

Trend Micro highlights how generative AI is influencing cyber fraud


Trend Micro, a cybersecurity solutions provider, detailed how generative AI (GenAI) is transforming cyber fraud techniques, particularly phishing and impersonation using deep fakes. 

Trend Micro’s latest research underscores the increasing use of artificial intelligence (AI) in attacks against models and applications, as well as in the creation of advanced phishing tactics.

“As global organizations deploy GenAI and other digital tools to gain competitive advantages, the importance of a unified cybersecurity strategy has never been greater,” said David Ng, managing director for Singapore, the Philippines, and Indonesia, Trend Micro. “Enhancing visibility, managing attack surface risk, and securing AI usage are key pillars of this strategy.”

Key developments include the rise of “Criminal GPT” services, which utilize GenAI for prompt injection and jailbreaking. These sophisticated methods enhance the effectiveness of phishing and business email compromise (BEC) attacks by improving content accuracy and evading traditional detection mechanisms.

Another concern is the anticipated decline of conventional phishing awareness training. The report suggests that new innovations in email and extended detection and response (XDR) technologies are necessary to combat these evolving threats. Defenders will need to go beyond traditional gateway protections to stay ahead of attackers.

Large-scale information stealing

Adversaries are increasingly leveraging GenAI for large-scale information stealing and offering reconnaissance-as-a-service (ReconaaS). A business impact scenario highlighted in the report describes a BEC attack where deepfake technology is used to confirm payment verification through a live phone call, leading to the fraudulent release of funds.

GenAI’s influence on cyber fraud extends to manipulating large language models (LLMs) through prompt injection, where malicious inputs cause unintended actions, and jailbreak prompts, which override application policies. Also, poisoning attacks — where adversaries introduce corrupted data into a model’s training pool — are becoming more prevalent.

This evolving landscape underscores the need for robust, innovative defense mechanisms to protect against AI-enhanced cyber threats.



Source

Related Articles

Back to top button