Google Extends Generative AI Reach Deeper into Security
Google this week extended its effort to apply generative artificial intelligence (AI) to cybersecurity by adding the ability to summarize threat intelligence and surface recommendations that guide cybersecurity analysts through investigations.
Announced at the Google Cloud Next ’24 conference, these additions to the Google Chronicle cybersecurity platform are based on the Gemini large language model (LLM), which Google is extending by exposing it to the cybersecurity data the company collects.
Google is also adding more security capabilities to the Google Cloud Platform (GCP). The company is making available in a private preview a natural language interface for Google Cloud Assist, through which its Gemini LLM can provide a summary of potential attack paths. Cybersecurity teams can also employ Gemini to recommend improvements to permission controls, manage encryption keys that an Autokey tool, now available in preview, will automatically create, and determine when to invoke confidential computing services based on processors from Intel and NVIDIA.
Google is also making available previews of a Privileged Access Manager (PAM) tool for managing permissions in near-real time and a Principal Access Boundary tool for enforcing policies based on identity. A next-generation firewall (NGFW), tools for thwarting distributed denial-of-service (DDoS) attacks and data protection services are also now generally available.
Finally, Google is previewing an audit management tool and is making Chrome Enterprise Premium available to embed threat and data protection, zero-trust access controls, policy controls, analytics and reporting on any endpoint.
Eric Doerr, vice president of engineering for Google Cloud Security, said it generally makes more sense to train a single LLM to improve cybersecurity than for each provider of a cybersecurity tool or platform to build and deploy its own LLM. As such, Google is working with third-party partners such as Palo Alto Networks to jointly apply AI to cybersecurity.
Regardless of how an LLM is trained, the scope of the cybersecurity tasks it can perform is only going to increase. As the underlying reasoning engines advance, cybersecurity teams will be able to use a natural language interface to instruct an LLM to perform a given task, rather than having to master the programming nuances of an IT automation platform.
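For illustration only, the sketch below shows what handing a security task to an LLM in plain English can look like using the publicly documented Vertex AI Python SDK. It is not the Chronicle integration Google announced; the project ID, model name and alert text are placeholder assumptions.

# A minimal sketch, assuming a GCP project with Vertex AI enabled and
# application-default credentials configured. The task is expressed in
# natural language rather than as automation-platform code.
import vertexai
from vertexai.generative_models import GenerativeModel

# Hypothetical project and region for illustration.
vertexai.init(project="example-security-project", location="us-central1")

model = GenerativeModel("gemini-1.0-pro")

# Placeholder alert text standing in for real telemetry.
alert = (
    "Multiple failed SSH logins from 203.0.113.7 followed by a successful "
    "login and an outbound transfer of 2 GB to an unknown host."
)

# Instruct the model in plain English to summarize and triage the alert.
response = model.generate_content(
    "You are assisting a security analyst. Summarize this alert, rate its "
    "severity and suggest the next three investigation steps:\n" + alert
)
print(response.text)

The point of the pattern is that the analyst describes the outcome they want; the model, not the analyst, supplies the procedural steps.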
Each organization will need to determine to what degree it trusts a generative AI platform to autonomously perform tasks. However, it should soon become simpler to identify vulnerabilities and automatically remediate them. Eventually, every cybersecurity team will have a smart AI assistant capable of performing a wide range of tasks that previously would have taken hours to complete, Doerr added.
Longer term, generative AI should make it easier for organizations to “lock shields” and collaboratively improve cybersecurity without having to expose any sensitive data to each other, noted Doerr. At this juncture, increased reliance on AI is inevitable to mitigate threats, reduce toil and make up for a chronic shortage of skills that has hampered cybersecurity teams for decades, he added.
Less clear is to what degree those advances in AI will fundamentally change how cybersecurity is achieved and maintained as the roles of cybersecurity teams continue to rapidly evolve.
Photo credit: Jakub Żerdzicki on Unsplash