Phishing Emails: How to Spot and Prevent Scams in the Age of GenAI


How Are Threat Actors Using Generative AI?

“We were able to trick a generative AI model to develop highly convincing phishing emails in just five minutes,” Carruthers notes in an IBM Security Intelligence blog post. “It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure setup. So, attackers can potentially save nearly two days of work by using generative AI models.”

Between these time savings and the email personalization generative AI allows for, threat actors are leveraging ChatGPT, WormGPT and other AI-as-a-service products to create new phishing emails at a rapid pace. This lets them attack more widely, more frequently and with greater success. The technology can also generate phishing emails customized for a specific group of people, a tactic especially useful for spear phishing.

This is a big reason why 98 percent of senior cybersecurity executives say they’re concerned about the risks posed by ChatGPT, Google Gemini (formerly Bard) and similar generative AI tools. But AI is merely a tool. Just as it can be used to sharpen phishing email attacks, it can be used to strengthen defenses against them.

FIND OUT: What is consent phishing and how can businesses prevent it?

How Can You Protect Against These New Attacks?

As phishing email attacks continue to evolve, security leaders must improve their defenses accordingly. According to a recent study, more than half of IT organizations rely on their cloud email providers and legacy tools for email security, confident that these and other traditional solutions will be able to detect and block AI-generated attacks. These protections help, but the best defense against AI is AI.

Check Point lists three main benefits of using AI for email security: improved threat detection, enhanced threat intelligence and faster incident response.

AI can identify phishing content through a range of techniques, including behavioral analysis, natural language processing, attachment analysis and malicious URL detection, while also enriching threat intelligence and accelerating incident response.
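To make the detection side of that list concrete, here is a minimal, rule-based sketch of the kinds of signals such tools weigh when scoring a message. The TRUSTED_DOMAINS allowlist, URGENCY_PHRASES keyword list and two-indicator threshold are illustrative assumptions, not any vendor’s actual implementation; production systems rely on trained language models and live threat-intelligence feeds rather than hard-coded lists.

```python
import re
from urllib.parse import urlparse

# Illustrative values only; a real system would draw on threat-intelligence
# feeds and trained models rather than hard-coded lists.
TRUSTED_DOMAINS = {"example.com", "examplebank.com"}
URGENCY_PHRASES = [
    "verify your account", "act immediately", "password expires",
    "unusual activity", "confirm your identity",
]

URL_PATTERN = re.compile(r"https?://\S+")


def score_email(body: str) -> dict:
    """Return simple phishing indicators found in an email body."""
    indicators = []

    # Malicious-URL heuristics: raw IP addresses and links outside the allowlist.
    for url in URL_PATTERN.findall(body):
        host = urlparse(url).hostname or ""
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
            indicators.append(f"link points to a raw IP address: {host}")
        elif host and not any(host == d or host.endswith("." + d)
                              for d in TRUSTED_DOMAINS):
            indicators.append(f"link to untrusted domain: {host}")

    # Crude language analysis: urgency phrasing common in phishing lures.
    lowered = body.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            indicators.append(f"urgency phrase: '{phrase}'")

    return {"suspicious": len(indicators) >= 2, "indicators": indicators}


if __name__ == "__main__":
    sample = ("Dear customer, unusual activity was detected. "
              "Verify your account at http://192.168.10.5/login "
              "before your password expires.")
    print(score_email(sample))
```

Run against the sample message, the sketch flags both the raw-IP link and the urgency phrasing, the same combination of link and language signals described above.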

UP NEXT: How to avoid becoming the target of a phishing email.

In addition to AI security defenses, businesses must also implement security training to reduce the likelihood of human error. That means educating employees on what generative AI-based phishing attacks look like, from telltale stylistic patterns to typically grandiose promises, explains Glenice Tan, cybersecurity specialist at the Government Technology Agency, in a Wired article.

“There’s still a role for security training,” she says. “Be careful and remain skeptical.”


