Cybersecurity and fraud prevention need to join forces
So-called bad bots unleashed by cybercriminals now account for almost 75% of internet traffic, according to a recent study. Their top five attack categories: fake accounts, account takeovers, scraping, account management, and in-product abuse.
Gavin Reid is on the front line of this assault. He’s chief information security officer of HUMAN Security, which helps clients in a range of industries stop online fraud that’s often automated via bots.
For its customers, HUMAN distinguishes bad bots from good ones, which perform helpful tasks like customer service and content moderation. The bad guys are hogging the spotlight. Last year alone, Reid tells me, his New York–based company saw a fivefold jump in malicious bot activity.
That’s hurting businesses and brand trust.
“We’re seeing customers come to us because they’re getting fleeced by these bots,” says Reid, the CISO whose firm’s clients include Priceline, Wayfair, and Yeti. A typical scenario he hears: “I put out a new whatever to sell on my platform, and 80% of all the traffic were bots, and normal people couldn’t even get there.”
Thanks to generative AI, it’s easy for criminals to create bots that convincingly mimic humans online, Reid explains. That makes it “really, really hard for companies like us and people to defend their infrastructure from attacks and to enable users to buy stuff.”
There’s little about defending against automated attacks in any of the security compliance regimes that organizations follow, Reid says. That includes SOC 2, the widely used System and Organization Controls audit framework, and International Organization for Standardization (ISO) guidelines.
“I feel like we’re in a bit of a gap,” Reid says. “And when we have a gap, then miscreants take advantage of that and use it against us.”
Mistrust within companies could be making the problem worse.
In some businesses, cybersecurity and fraud prevention are still siloed. That doesn’t add up for Reid, who points out that times have changed.
“Let’s face it: Financial fraud—or whatever business fraud—most of it is happening online,” he says. “So having these groups separated out doesn’t help at all.”
Then why does it persist?
“Usually, it has to do with political reasons and org structures, not what makes sense for solving this particular problem,” Reid says.
The divide is more common among older organizations, he notes. For example, the big U.S. banks typically have separate fraud and cyber divisions. That’s because they started out with teams that handled old-school crimes like stickups and check fraud, then later launched cybersecurity groups to combat online offenses such as hacking, phishing, and ransomware.
But the wall is coming down. Most large financial institutions now operate a “fusion center” that sees both sides join forces, Reid says. “It’s continuing to merge, but it’s happening slowly.”
For businesses seeking a more collaborative cybersecurity and fraud strategy, Reid suggests following the banks’ lead. “It’s like they’re getting into the pool together,” he says of the two departments. “So they can keep their structure, they can keep the politics, but the actual people that are dealing with the day-to-day issues can work very closely together.”
A second step: “Single leadership that would be responsible for the delivery of both,” ensuring shared access to tools and capabilities, Reid says.
No ifs, ands, or bots about it.
Nick Rockel
nick.rockel@consultant.fortune.com
IN OTHER NEWS
A Swift buck
Swifties have good reason not to take that coveted concert ticket at face value. U.K. bank Lloyds just warned customers that it’s seen a surge in ticketing scams involving Taylor Swift’s upcoming shows. British fans’ estimated losses since last July: £1 million ($1.25 million). More than 600 Lloyds clients have complained of being duped, mostly via Facebook. Talk about bad blood.
Fashion victims
So what else is new? Once again, fast-fashion giant Shein stands accused of copyright infringement. A U.S. class-action lawsuit alleges that the Chinese company used electronic monitoring and AI to scour the internet for popular designs, then stole them from artists to make its products. It’s not a good look for Shein, which is also under fire for treating workers poorly and running an environmentally unsustainable business.
Mind the gap
Unethical use of AI could stymie its funding and development, reckons Paula Goldman, chief ethical and humane use officer at Salesforce. “It’s possible that the next AI winter is caused by trust issues or people-adoption issues with AI,” Goldman said at a recent Fortune conference in London. To build workers’ trust in AI tools, she called for “mindful friction”—checks and balances so they do more good than harm. Let’s hope that isn’t as uncomfortable as it sounds.
Flight risk
Boeing’s trust woes continue. Whistleblower Sam Salehpour, a quality engineer for the aviation titan, told a Senate hearing that managers blew off his repeated warnings of safety problems. “I was told, frankly, to shut up,” said Salehpour, who testified that he witnessed gaps between aircraft fuselage panels that could put Boeing passengers in danger. Inspection documents confirmed those sightings to be the plane truth.
TRUST EXERCISE
“Businesses are eager to capitalize on the power of generative AI, but they are wrestling with the question of trust: How do you build a generative AI application that provides accurate responses and doesn’t hallucinate? This issue has vexed the industry for the past year, but it turns out that we can learn a lot from an existing technology: search.
By looking at what search engines do well (and what they don’t), we can learn to build more trustworthy generative AI applications. This is important because generative AI can bring immense improvements in efficiency, productivity, and customer service—but only when enterprises can be sure their generative AI apps provide reliable and accurate information.”
Generative AI’s tendency to hallucinate—in other words, deliver false or misleading information—is a trust buster for companies. Sridhar Ramaswamy, CEO of cloud data company Snowflake, offers a way forward: combine the best qualities of search engines with AI’s strengths.
Unlike large language models (LLMs), search engines are good at sifting through mountains of information to identify high-quality sources, he notes. Ramaswamy envisions AI apps emulating those ranking methods to make their results more reliable. That would mean favoring company data that’s been accessed, searched, and shared most often, as well as sources considered trustworthy.
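To make the idea concrete, here’s a minimal sketch of that approach—a toy illustration, not Snowflake’s actual method. It ranks internal documents by keyword relevance, access frequency, and a trust rating, then builds a prompt for the LLM from only the top results. All field names, weights, and sample data are illustrative assumptions.

```python
import math
from dataclasses import dataclass

# Illustrative sketch of search-style ranking used to ground an LLM.
# The scoring weights and metadata fields are hypothetical.

@dataclass
class Document:
    text: str
    access_count: int   # how often the document has been opened or shared
    trust_score: float  # 0-1 rating of the source's reliability

def rank_documents(docs, query_terms, top_k=3):
    """Score documents the way a search engine might: keyword relevance,
    boosted by popularity and source trust."""
    def score(doc):
        relevance = sum(term.lower() in doc.text.lower() for term in query_terms)
        popularity = math.log1p(doc.access_count)  # dampen raw counts
        return relevance * (1.0 + doc.trust_score) + 0.1 * popularity
    return sorted(docs, key=score, reverse=True)[:top_k]

def build_grounded_prompt(question, docs):
    """Hand the LLM only the top-ranked sources, so its answer is grounded
    in vetted company data instead of free association."""
    context = "\n".join(f"- {d.text}" for d in docs)
    return (
        "Answer using only the sources below. If they don't contain "
        f"the answer, say so.\n\nSources:\n{context}\n\nQuestion: {question}"
    )

# Example: the heavily used, trusted policy doc outranks a stale wiki page.
docs = [
    Document("Travel policy: book flights via the internal portal.", 950, 0.9),
    Document("Old wiki: email the office manager to book travel.", 12, 0.2),
]
top = rank_documents(docs, ["book", "travel"])
print(build_grounded_prompt("How do I book travel?", top))
```

Two small choices echo what search engines do well: the logarithm keeps a single viral document from drowning out everything else, and the prompt explicitly tells the model to admit when the sources come up empty rather than improvise.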
It helps to think of LLMs as interlocutors rather than sources of truth, Ramaswamy argues. GenAI may be a smooth talker, but its words need more substance.