Unlocking the power of GenAI for cybersecurity
David Mareels is excited about the prospects for generative AI. And the senior director of product management at Sophos is particularly enthused about how it might be applied to cybersecurity.
Generative AI, he tells Tech Monitor, is “a platform shift”, comparable in its disruptive potential to the birth of the internet, mobile, and cloud. He corrects himself: “I actually think it’s bigger than mobile and cloud, so it’s most comparable with the internet. Every company today is an internet company.” He pauses briefly. “In ten to 15 years’ time, every company will be a GenAI company.”
Why does any of this matter to the CISO and other security practitioners? His answer is detailed and nuanced, but is perhaps best encapsulated in this headline quote: “Think of GenAI as a superhuman analyst, with a brain the size of a planet, that knows every bit of business data, every tool and how to interface with those tools. And it’s a chatbot, too. Now give that capability to the defender. That has never been seen before.”
Mareels joined Sophos two years ago as part of the acquisition of SOC.OS, the start-up company he co-founded. SOC.OS solves the problem of alert fatigue and limited visibility so prevalent in IT security environments. The acquisition has allowed Sophos to advance its Managed Detection and Response (MDR) and Extended Detection and Response (XDR) solutions for organisations of all sizes by including additional telemetry and context from alerts and events across dozens of third-party endpoint, server, firewall, identity and access management (IAM), cloud workload, email, and mobile security products. This is the work keeping Mareels busy and the prism through which he looks at security.
SecOps is fundamentally a data challenge
It is mid-May at DTX Manchester, the North’s biggest enterprise IT event. Mareels has recently come off stage having made the case for generative AI.
In his presentation – “Using GenAI to proactively strengthen your cyber security” – Mareels argues that, because SecOps is fundamentally a data challenge, those who don’t adopt generative AI will soon be at a competitive disadvantage. Apply it, he advises, to core capabilities such as data pre-processing, investigation and interpretation. And then apply it to other use cases where it makes sense.
To bring generative AI’s potential to life, he applies it to two critical stages of the security operations pipeline, highlighting the “clean” and “correlate” steps. He characterises his first use case as “enrichment”: asking the AI not just to explain an attack but to classify it, give reasons for the classification and assign a confidence score.
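To make that enrichment step concrete, the sketch below shows how an alert might be handed to a hosted model and asked for a classification, reasons and a confidence score. It is illustrative only – the prompt wording, the classify_alert helper and the choice of OpenAI’s chat completions API are assumptions, not a description of Sophos’ implementation.

```python
# Illustrative sketch only: an "enrichment" step that asks an LLM to classify
# a security alert, justify the classification and attach a confidence score.
# The prompt, JSON shape and helper name are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_alert(raw_alert: dict) -> dict:
    """Ask the model to enrich one alert with a classification, reasons
    and a confidence score, returned as JSON."""
    prompt = (
        "You are a SOC analyst. Classify the following alert "
        "(e.g. phishing, lateral movement, ransomware precursor), "
        "give your reasons, and add a confidence score from 0 to 1.\n"
        f"Alert: {json.dumps(raw_alert)}\n"
        'Reply as JSON: {"classification": ..., "reasons": [...], "confidence": ...}'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any hosted chat model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # keep the output as repeatable as possible
    )
    return json.loads(response.choices[0].message.content)

# Example: a made-up firewall alert
print(classify_alert({
    "source": "firewall",
    "event": "outbound connection to a known C2 domain",
    "host": "finance-laptop-07",
}))
```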
His second use case, he concedes, “is not there yet”, but offers an interesting foretaste of what is to come. Here he asks the generative AI to mimic the work of an analyst – from case creation to customer escalation – and to come up with recommendations by categorising attacks logically and consistently over time. Early experimentation suggests the human analyst remains by far the better option. That could soon change, however.
The other factor helping make the case for generative AI is what Mareels describes as the talent war. In short, there isn’t enough cyber security expertise in the market, and where there is, it is being snapped up by those – including Sophos – providing MDR services. And this brings us back to Mareels’ second use case. Already, it can sensibly be applied to XDR and help a junior analyst assimilate relevant information. “It drops the adoption barrier,” he argues.
Levelling the defender-attacker playing field
After the talk, we pick over some of those themes and discuss when use cases – currently a series of well-engineered development projects – will become production- and customer-ready.
Back to the excitement first. Why does Mareels describe generative AI as a platform shift? Because, he says, it offers a step change in capabilities which, in turn, will have a dramatic effect on business models. Its mass adoption will cement its impact. Apply this to cyber security and, he says, it is likely to “tip the balance back in favour of the defenders over the attackers – or at least level the playing field”.
This is a bold claim, especially if we consider the history of cyber criminality. At every stage, bad actors have proved expert adopters of new technology. Why should generative AI be any different? Mareels offers two thoughts.
The first is from the defender perspective. Here, generative AI can help solve the people problem, he argues. “If you look at attackers versus defenders today, attackers are typically sophisticated, with all the tools they need at their disposal. On the defender side, by contrast, it is pretty bleak. You’ve got a big talent and resource capability gap. Technology is not the issue. Cyber security is rooted in a human problem: how do I deal with ten screens? How do I understand the data coming from ten screens? Not only, how do I understand it day-to-day, but how do I extrapolate that understanding to my business context so I make good decisions? GenAI can fundamentally solve those problems.”
He offers his second thought from an attacker’s point of view, for whom he thinks generative AI is likely to be of limited use. Mareels says an attacker essentially carries out two tasks: reconnaissance to gain access, followed by malevolent activity once inside an organisation’s network. Mareels concedes that generative AI could prove extremely useful in the reconnaissance stage. It can make social engineering, for example, far more effective. “A foreign speaker can now craft a perfectly written phishing email in English,” he suggests.
So far so good for the bad actor. Once inside “the digital castle”, however, the cybercriminal will continue to rely on the tools they have always used – the same tools we use as organisations. He cites Sophos’ work in tracking popular ransomware attack techniques. “The top 10 tools haven’t changed that much over time.” This “bottleneck”, he says, is not going to be eased by generative AI. At least not yet. “Things may change once we see that it’s not PsExec [the command-line tool that allows users to run programs on remote systems] but some kind of GenAI power tool. But I haven’t seen anything yet.”
From use case to commercial reality
Mareels says use cases are useful because they help concentrate minds. They avoid, too, the scattergun approach that sometimes accompanies new technologies. Targeted examples are much more impactful.
When are we likely to see generative AI use cases commercialised and put in the hands of customers? Mareels is confident that production-ready solutions from Sophos will appear this year. There are, however, some regulatory hurdles to overcome first. “It’s one thing testing with an OpenAI model hosted in the United States. It’s another thing making it available to 6,000 Sophos customers.” Issues around data privacy protection – within and across regions – need to be resolved. “As the technology and regulations mature in tandem, we will develop production-ready solutions in lockstep.”
Once available, Mareels is optimistic that generative AI tools will prove transformational. “Good detection, investigation and response is fundamentally a data problem. Detecting threats relies on us ingesting events from lots of systems,” he says. “How do I take 30 event streams flying at me, pull them in and detect threats? GenAI is how.”
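As a rough illustration of what “pulling in” many streams might look like before any detection logic runs, the sketch below normalises events from different tools onto a common schema and groups them by host and five-minute window. The field names, sources and window size are assumptions for illustration, not Sophos’ pipeline.

```python
# Illustrative sketch: normalise heterogeneous event streams onto a common
# schema, then group them by host and coarse time window so that related
# alerts from different tools land in the same bucket for investigation.
# Field names, sources and the five-minute window are assumptions.
from collections import defaultdict
from datetime import datetime

def normalise(event: dict, source: str) -> dict:
    """Map a source-specific event onto a minimal common shape."""
    return {
        "source": source,
        "host": event.get("hostname") or event.get("device") or "unknown",
        "time": datetime.fromisoformat(event["timestamp"]),
        "summary": event.get("message", ""),
    }

def correlate(events: list[dict], window_minutes: int = 5) -> dict:
    """Bucket normalised events by (host, time window start)."""
    groups = defaultdict(list)
    for ev in events:
        bucket = ev["time"].replace(
            minute=ev["time"].minute - ev["time"].minute % window_minutes,
            second=0,
            microsecond=0,
        )
        groups[(ev["host"], bucket)].append(ev)
    return groups

# Example: two different tools reporting on the same host in the same window
streams = {
    "firewall": [{"timestamp": "2024-05-14T09:02:11", "device": "finance-laptop-07",
                  "message": "blocked outbound connection"}],
    "endpoint": [{"timestamp": "2024-05-14T09:03:40", "hostname": "finance-laptop-07",
                  "message": "suspicious process spawned"}],
}
events = [normalise(e, src) for src, evs in streams.items() for e in evs]
for (host, start), grouped in correlate(events).items():
    print(host, start, [e["source"] for e in grouped])
```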