IRONSCALES Applies Generative AI to Phishing Simulation


IRONSCALES has made generally available a phishing simulation tool that uses generative artificial intelligence (AI) to enable cybersecurity teams to create as many as 2,000 simulations of a spear phishing attack in less than an hour.

The Phishing Simulation Testing platform from IRONSCALES now also provides a set of templates and streamlined workflows for setting up phishing campaigns using a set of large language models (LLMs) that the company has trained.

IRONSCALES CEO Eyal Benishti said it’s already apparent that cybercriminals will use generative AI tools to launch more sophisticated phishing attacks at unprecedented volume. As such, cybersecurity teams will need tools and platforms that enable them to fight fire with fire, he added.

It’s not clear how many phishing simulations an organization should run, but there’s no doubt training will need to improve. A phishing campaign crafted with a generative AI tool may be more difficult than ever to detect. As the number of phishing emails continues to increase, however, end users should learn to be wary of the sense of urgency these messages typically try to create. For example, an email asking an end user to make an exception to a process should automatically require additional confirmation, even when it appears to come from a CEO or CFO.

That issue will become even more problematic when phishing attacks include deepfakes that incorporate audio and video files appearing to come from someone an end user might implicitly trust, noted Benishti.

One thing is certain: the Nigerian Prince era of phishing attacks, which could be easily detected because of grammatical errors or misused colloquial phrases, is now all but over. Cybercriminal gangs will no longer have to spend nearly as much time and effort honing content as they once did. The overall cost of launching a phishing campaign in the age of generative AI may soon approach zero.

AI tools, hopefully, will make it easier to identify anomalies indicative of a phishing campaign that most end users would not spot on their own. However, by dint of sheer volume, some phishing campaigns will inevitably make it into an email inbox. The challenge then becomes providing the level of training required to ensure end users remain constantly suspicious of any request that asks them to perform a task outside of a normal workflow. Even then, end users should confirm the identities of all parties involved in those workflows through additional communications made over multiple mediums.

A day will inevitably come when a phishing attack fueled by generative AI results in a breach. The challenge, and the opportunity, now is to define a set of processes that minimize the potential blast radius of that breach. After all, it’s not really fair to terminate employees for making a mistake in the absence of any real training, when the tools needed to provide it are more accessible than ever.
