
Generative AI Will Not Fulfill Your Autonomous SOC Hopes (Or Even Your Demo Dreams)


All the flashiest demos of generative AI show chatbots that can query the enterprise environment for just about anything imaginable. Want to know all the vulnerabilities you have? How many devices are unprotected on the network? Whether you’ve been hit with Scattered Spider in the last six months? Well then, Security Wingman™, Caroline™, Scorpio™, or Orange™ have got you covered…or so they claim.

In a previous blog, we discussed why existing security chatbots are novel but not useful in the long term; namely, they do not fit into the analyst experience. We also covered why the autonomous security operations center (SOC) is a pipe dream, which remains true today despite generative AI.

However, there's a deeper issue at play here, one as fundamental to security as time itself: consolidating and accessing enterprise data is an absolute bear of a problem that remains unsolved. Put more simply, no security tool can ingest, store, and interpret all enterprise data. And beyond that, security tools don't play nice together anyway.

Let’s break this down: If we are to leverage generative AI for understanding everything about the enterprise environment, it will need to get information in one of two ways:

  1. Continuously training on all of the data in the enterprise environment.

Here’s the problem: Getting all the enterprise data into one place is challenging and costly, as we have seen with the security information and event management (SIEM) market. Further, continuous training on this data is expensive and resource-intensive. These two factors make this approach nearly impossible if accuracy and timeliness are important, which, in this instance, they are.

  2. Interpreting your request and using integrations with different security tools to query for the relevant information.

Here's the problem: Integrating security tools remains a nontrivial, unsolved problem that generative AI does not yet fix. Until we can integrate security tools more effectively, this approach will not deliver accurate and timely results. Moreover, using LLMs alone to query large, complex data architectures simply isn't feasible today; techniques such as anomaly detection and predictive modeling are still required.
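To make the integration problem concrete, here is a minimal sketch of the "interpret and query" approach. Everything in it (the tool names, fields, and return values) is invented for illustration; real products expose far messier APIs.

```python
# Hypothetical sketch of the "interpret and query" approach. All tool
# names, fields, and values below are invented for illustration.

def query_edr(hostname):
    # Each security tool returns results in its own proprietary shape.
    return {"host": hostname, "detections": 2,
            "last_seen": "2024-05-01T12:00:00Z"}

def query_vuln_scanner(hostname):
    # Same asset, but different field names, casing, and date format.
    return {"asset_name": hostname.upper(), "open_vulns": 14,
            "scanned": "05/01/2024"}

def answer(hostname):
    # In the real approach, an LLM would translate a natural-language
    # request into these tool calls. The hard part is not the language
    # model: it is the hand-written glue that reconciles each tool's
    # identifiers and formats into one coherent answer.
    edr = query_edr(hostname)
    vulns = query_vuln_scanner(hostname)
    return {
        "host": edr["host"],
        "detections": edr["detections"],
        "open_vulns": vulns["open_vulns"],
    }

print(answer("web-01"))
```

Every new tool added to this picture means another bespoke mapping, which is exactly why the approach degrades as the environment grows.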

There is hope for frameworks like the Open Cybersecurity Schema Framework (OCSF) to address this; however, those frameworks are not yet comprehensive and don’t have the industry-wide adoption needed.
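The promise of a shared schema is that the per-tool glue gets written once, at the edge. Below is a hedged sketch in the spirit of OCSF, not using real OCSF attribute names: two invented vendor events are mapped into one shared shape so a single query can span both.

```python
from datetime import datetime, timezone

# Hypothetical normalization sketch in the spirit of OCSF. The vendor
# events and target field names are invented, not real OCSF attributes.

def normalize_edr(raw):
    # EDR vendor: epoch seconds, capitalized string severity.
    return {
        "device": raw["host"],
        "severity": raw["sev"].lower(),
        "time": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
    }

def normalize_firewall(raw):
    # Firewall vendor: ISO timestamp, numeric priority (1 = highest).
    sev = {1: "critical", 2: "high", 3: "medium"}.get(raw["priority"], "low")
    return {
        "device": raw["device_name"],
        "severity": sev,
        "time": raw["event_time"],
    }

events = [
    normalize_edr({"host": "web-01", "sev": "High", "ts": 1714564800}),
    normalize_firewall({"device_name": "web-01", "priority": 2,
                        "event_time": "2024-05-01T12:00:00+00:00"}),
]
# Only after this per-vendor mapping exists can one query span both tools.
print([e["severity"] for e in events])
```

The catch, as noted above, is adoption: every vendor that does not emit the shared schema still needs a mapper like these, written and maintained by someone.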

Generative AI is one piece of a larger puzzle

It's easy to trust generative AI implementations because of how human they feel. However, it's important to remember that generative AI is only one piece of the puzzle, not magic. Foundation models built for other tasks, such as time-series forecasting or computer vision, can also benefit security operations in different ways. But none of this has solved the fundamental problems of security, data consolidation and tool integration among them. Until we get those right, we should be wary of how and what we use generative AI for.


The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/gorodenkoff


