Generative AI

DoubleVerify report: Ad fraud schemes using generative AI will increase in scale, sophistication


Ad fraud often follows the flow of advertising dollars, but researchers suggest scammers are now also following another industry trend: the adoption of generative AI.

Generative AI is helping scale sophisticated ad fraud efforts across growing online ad platforms, including connected TV and streaming audio, according to a report from DoubleVerify. The measurement giant reported that generative AI contributed to 23% growth in new fraud schemes in 2023, leading to a 58% increase in ad fraud on streaming platforms. The report, released Monday, analyzed 1 trillion impressions for 2,000 brands across 100 markets worldwide and includes data about ads shown on desktop, mobile web, mobile app and connected TV platforms. Researchers also found evidence of a 269% increase in existing bot fraud schemes; incident types included bot fraud, site fraud, app fraud, hijacked devices, non-human data center traffic and injected ad events.

Generative AI has made it easier to falsify data patterns and make fraudulent traffic look more human. The proliferation of AI-generated content also helps fraudsters create new shell company websites, publish new apps and write fake reviews, which has been especially consequential in the mobile app space. Generative AI also helped CTV ad scams evolve three times as fast in 2023 as in 2022, according to Jack Smith, DoubleVerify’s chief innovation officer. The trend is expected to continue through 2024 as generative AI tools become more capable, more ubiquitous and easier to use.

“It’s not just a problem for advertisers, it’s a problem for publishers as well,” Smith told Digiday. “Because it’s funneling money away from legitimate traffic. So if you’re spoofing a well known app, then you’re basically siphoning money out of the ecosystem. It has a negative effect for both publishers and advertisers.”

Two examples of scams using generative AI are FM Scam and CycloneBot, both of which DoubleVerify unmasked earlier this year. CycloneBot, a CTV-focused scam, is harder to detect because it uses AI to create longer viewing sessions on fake devices and to quadruple the volume of traffic compared with older scams. FM Scam targets streaming audio, using AI to generate fake audio traffic that looks like legitimate user activity across various devices and audio players. DoubleVerify estimated the tactic spoofed 500,000 devices in March 2024 and said it marks the first instance the company has found of ad fraud targeting smart speakers.

The tactics can be costly for advertisers. DoubleVerify said FM Scam is part of a larger audio ad fraud scheme called BeatSting, which is estimated to siphon more than $1 million a month from advertisers. Meanwhile, researchers say CycloneBot has been used to fake up to 250 million ad requests and spoof around 1.5 million devices each day, costing advertisers as much as $7.5 million a month.
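To put those CycloneBot figures in perspective, here is a rough back-of-envelope sketch (our own arithmetic, not DoubleVerify’s) of what the reported peaks imply, assuming a 30-day month and that every faked request is monetized:

```python
# Back-of-envelope math on DoubleVerify's reported CycloneBot peaks.
# Assumptions (ours, not DV's): a 30-day month and that every faked
# ad request is monetized.
requests_per_day = 250_000_000  # peak faked ad requests per day
devices_per_day = 1_500_000     # peak spoofed devices per day
monthly_cost_usd = 7_500_000    # reported advertiser cost per month

daily_cost_usd = monthly_cost_usd / 30                     # ~$250,000 per day
implied_cpm = daily_cost_usd / (requests_per_day / 1_000)  # cost per 1,000 requests
requests_per_device = requests_per_day / devices_per_day

print(f"Implied effective CPM: ${implied_cpm:.2f}")                         # ~$1.00
print(f"Faked requests per spoofed device: {requests_per_device:.0f}/day")  # ~167
```

An implied rate of roughly $1 per thousand requests sits well below typical CTV CPMs, which would be consistent with only a fraction of the faked requests converting into paid impressions.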

The findings come as AI raises more concern over made-for-advertising (MFA) content. In a separate report, DoubleVerify said generative AI has led to a nearly 20% increase in online MFA content, with growth largely driven by lower-tier MFA impressions on sites that mix MFA and non-MFA qualities. When DoubleVerify and Sapio Research surveyed 1,000 global advertisers, 57% said they saw AI-generated content as a challenge for the digital ecosystem, while 54% said generative AI has degraded media quality.

AI-generated reviews can make mobile apps appear popular with users while also making harmful apps look more legitimate during investigations. That has led DoubleVerify to double its investigations into potentially fraudulent apps over the past year. Examples Smith cited include classic screen-saver apps and niche utilities such as an app that lets people leave their TV on for their pets.

And how might Apple’s newly announced AI tools for mobile apps and Siri further complicate the ecosystem? It’s too soon to tell, said Smith, who was previously global chief product officer of investment at GroupM, the WPP-owned media agency. He thinks the impact will hinge on how the tools are implemented, noting that each fraud scheme relies on different mechanisms.

“We’ll start looking at what the effect is of any changes in any operating system and any connectivity or data sharing between those or fake apps,” Smith said. “We’re looking at all these things, but we really want to see what it’s like in production before we say if it might have the potential to do something.”

Misinformation flourishing alongside AI ad fraud

Privacy experts and ad fraud researchers say they’re not surprised to see generative AI enabling increased ad fraud. Other organizations studying generative AI’s impact on ad fraud and adjacent issues like misinformation have continued to raise warnings. At NewsGuard, researchers have been analyzing not just what generative AI means for ad fraud but also how online advertising can fund misinformation spread by bad actors.

“We said more than a year ago that AI was going to become a force multiplier when it comes to misinformation and disinformation,” said Steven Brill, co-founder of NewsGuard. “This impressive, comprehensive study by DV documents that threat in spades, as do the reports we’ve done showing, for example, that one Russian disinformation operative out of Moscow is apparently responsible for 167 different hoax websites posing as real news sites and promoting content aimed at disrupting the American elections.”

Nearly 75% of misinformation websites rely on advertising, according to a newly published study by researchers at NewsGuard, Stanford University and Carnegie Mellon. The findings, published last week in the journal Nature, also showed that between 46% and 82% of common advertisers inadvertently had ads served on misinformation sites. The researchers advised that online platforms increase ad transparency to address the issue, though that is something advertisers and consumers have spent years requesting without much progress.

DoubleVerify says its own AI systems help it detect ad fraud schemes, but others doubt there is a surefire fix. Measurement firms have also recently faced more scrutiny over conflicts of interest that arise from being paid by the same companies they measure and rate. Fighting harmful AI with “good AI” is an “irresponsible false promise,” said Arielle Garcia, director of intelligence at Check My Ads, a nonprofit watchdog organization.

She also cited recent controversy after DoubleVerify gave X a 99.99% brand safety rating, as well as the company’s “repeated failures to detect or action invalid traffic,” such as the Forbes-operated MFA subdomain.

“With GenAI, the garbage websites that DV is paid on volume to monitor are infinite, so one can imagine them and other legacy third party verification vendors salivating at the opportunity,” said Garcia, who was previously chief privacy officer at Universal McCann. “If DV put as much effort into its product as it did into these sales pitch studies, they might actually have a value proposition.”


