
Organisations Remain Unprepared to Fight AI-Driven Fraud, Signicat Reveals


Fraud prevention decision-makers across Europe are well aware of the growth and danger of AI-driven identity and financial fraud, but are unprepared to combat it, Signicat, the European digital identity and fraud prevention solution provider, has revealed in a new report.

The ‘Battle against AI-driven Identity Fraud’ study by Signicat delves into how organisations across Europe are battling the growing threat of AI-driven identity fraud. It asked banks, insurance providers, payment providers and fintechs about their experience, how AI is changing fraud, and whether they are prepared to fight it.

Over a thousand fraud decision-makers across Belgium, Germany, the Netherlands, Norway, Spain, Sweden, and the UK took part in the research.

Signicat found that around a third of AI-driven fraud attempts succeed: respondents estimated that 42.5 per cent of detected fraud attempts use AI, and that 29 per cent of those attempts are successful.

One in nine respondents estimated that AI is used in as much as 70 per cent of fraud attempts against their organisation, while an estimated 38 per cent of revenue lost to fraud is attributed to AI-driven attacks.

Organisations are unsure how to combat AI fraud

Most fraud decision-makers agreed that AI is a major driver of identity fraud (73 per cent), that AI will enable almost all identity fraud in the future (74 per cent), and that AI will mean more people will fall victim to fraud than ever before (74 per cent).

Asger Hattel, CEO of Signicat

However, many organisations remain unprepared for the growing threat of AI-driven fraud, explains Asger Hattel, CEO of Signicat: “Fraud has always been one of our customers’ biggest concerns, and AI-driven fraud is now becoming a new threat to them. It now represents the same amount of successful attempts as general fraud, and it is more successful if we look at revenue loss.

“AI is only going to get more sophisticated from now on. While our research shows that fraud prevention decision-makers understand the threat, they need the expertise and resources necessary to prevent it from becoming a major threat. A key part of this will be the use of layered AI-enabled fraud prevention tools, to combat these threats with the best that technology offers.”

Taking action against AI

AI is enabling more sophisticated fraud, at a greater scale than ever seen before. Even if fraud success rates remain the same, the sheer volume of attempts means that fraud levels are set to explode, Signicat explained.

The digital identity firm revealed that account takeover attacks are the most popular type of fraud, often taking advantage of weak or reused passwords. Deepfakes, often used to impersonate an existing account holder rather than to create a new or synthetic identity, are also far more common than before, now accounting for around one in 15 fraud attempts.

Although organisations generally understand the scale of the damage AI-driven fraud can inflict, most do not know which techniques and technologies will help them the most. Signicat also explained that many firms only have plans in place, with implementation timescales mostly within the next twelve months, suggesting a slowness to adapt to evolving fraud types.

David Birch, director of Consult Hyperion, commented: “It is essential that financial firms have a robust strategy for AI-driven identity fraud. Identity is the first line of defence. Identity systems must be able to resist and adapt to ever-changing fraud tactics, to protect legitimate customers and ensure the reputation of the service.”

A layered approach could be key to staying ahead of AI-driven fraud, says Signicat. The firm itself offers data enrichment and verification solutions, as well as ongoing identity monitoring to help detect fraud committed after the customer sign-up process.


