Generative AI

Fight fire with fire? Gen AI as a defence against AI-powered cyberattacks


A 15-second clip of someone’s voice is all it takes for OpenAI’s powerful voice generation platform, Voice Engine, to create a synthetic audio clip. Little surprise, then, that the company has deemed it too risky for general release for now, even though its envisioned uses include reading assistance, translation and accessibility applications. OpenAI’s apprehension is understandable. Generative AI tools are arguably the most transformative technology of our time, yet the Large Language Models (LLMs) at their foundation are being used by threat actors to generate phishing attacks, malware, morphed digital identities, deepfakes and near-realistic deceptive communication.

Text-to-video generation systems are rapidly improving; OpenAI’s Sora, for instance, can generate high-quality videos from text prompts. (Image from OpenAI)

“Cybersecurity is inherently a data problem — the more data that enterprises can process, the more events they can detect and address,” said Jensen Huang, founder and CEO of Nvidia, at a recent keynote announcing tools to build gen AI ‘copilots’. AI-generated content, now increasingly realistic, is blurring the line between what is real and what is generated, for consumers as well as enterprises.


The hope is that AI can improve detection of cyberattacks, identifying malicious emails, malware and phishing attempts, and ultimately making them easier to counteract. The scale of these threats is such that consultancy firm McKinsey expects the cybersecurity space to be worth as much as $2 trillion by 2025, up from $150 billion in 2021.

It is, however, no longer possible to treat consumers and enterprises as separate streams, as we often do with technology and solutions, since generative AI has blurred those lines. Similar toolsets are available to consumers and enterprise subscribers; Google Gemini and Microsoft’s Copilot are two examples. Any improvements to LLMs for enterprise and cloud systems will benefit consumers too.

Late last week, tech company Cisco announced its Hypershield solution for enterprises, to protect applications, devices and data across public and private data centers as well as cloud networks. The data being protected by companies using this solution (and others like it) is our data. More to the point, Cisco is collaborating with Nvidia on this, with a plan for broader compatibility and capabilities.

“We’ll also extend Hypershield beyond the data center. Before long, a hospital will be able to secure its medical devices and other operational technology with Hypershield. Manufacturers will be able to do the same with the tech that sits on the factory floor,” says Tom Gillis, Senior Vice President and General Manager for the Security Business Group at Cisco.

A wide array of increasingly competent text-to-image, text-to-voice and audio-to-audio generation tools is readily available, some directly on smartphones and PCs. That too can spawn more threat actors. The scope of content is broad as well: beyond generating media, AI tools can help users compose emails and summarise documents, capabilities that can be turned to deceptive communication via generated digital identities.

No longer can you easily spot the scam of someone claiming to be a Nigerian prince seeking help with banking to route his wealth. “GenAI can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing. This will highly likely increase over the next two years as models evolve and uptake increases,” warns the UK’s National Cyber Security Centre.

Can human monitoring alone, supported by tools that don’t rely on fast compute, detect small or even large-scale attacks? The situation isn’t very confidence-inspiring. “Human analysts can no longer effectively defend against the increasing speed and complexity of cybersecurity attacks,” says David Reber Jr., chief security officer at Nvidia.

There are reasons for this. “The rise of generative AI has the potential to lower the barrier of entry for low-skilled adversaries, making it easier to launch attacks that are more sophisticated and state of the art,” said George Kurtz, CrowdStrike’s CEO and co-founder, pointing to a new element in the multi-layered cybersecurity battle, in the security firm’s 2024 Global Threat Report.

The report references Indrik Spider, a sophisticated e-crime group which, in February last year, accessed data stored in Azure Key Vault, the cloud-based credential manager. Logs show the threat actors visited ChatGPT at the same time, presumably for quick guidance on navigating the Azure platform.

Further, OpenAI confirmed in February that multiple threat actors had used its gen AI tools for assistance in mounting cyberattacks. A group called Charcoal Typhoon used ChatGPT to research various companies and security tools, debug code and generate scripts for phishing, while a group called Salmon Typhoon used it to translate technical papers, get coding assistance and research ways processes could be hidden on a system.

These and many other such users had their accounts disabled, but only after OpenAI dug deeper. “Our Intelligence and Investigations, Safety, Security, and Integrity teams investigate malicious actors in a variety of ways, including using our models to pursue leads, analyze how adversaries are interacting with our platform, and assess their broader intentions,” the company said in a statement.

Attacks aimed at data are often paired with methods such as SIM-swapping, MFA bypass and stolen API keys to gain initial access.

Fight fire with fire?

Generative AI can be used for adaptive threat detection, predictive analysis, analysing malware and phishing fingerprints, and even simulating threats to enable more realistic training scenarios.
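To make the detection side concrete, here is a minimal, hypothetical sketch of how a language model can score an incoming email for phishing characteristics, using Hugging Face’s zero-shot classification pipeline. The model choice, labels and threshold are illustrative assumptions, not any vendor’s actual method; a production system would layer this over sender authentication, URL reputation and behavioural signals.

```python
# A toy sketch of AI-assisted phishing detection: a zero-shot
# classifier scores an email against candidate labels. Model,
# labels and threshold here are illustrative assumptions only.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # a widely used NLI model
)

email_body = (
    "Dear customer, your account has been suspended. "
    "Verify your credentials immediately at the link below "
    "to avoid permanent closure."
)

result = classifier(
    email_body,
    candidate_labels=["phishing or scam attempt", "legitimate business email"],
)

# The pipeline returns labels sorted by score, highest first.
label, score = result["labels"][0], result["scores"][0]
if label == "phishing or scam attempt" and score > 0.7:  # arbitrary threshold
    print(f"Suspicious email flagged (confidence {score:.2f})")
else:
    print(f"No action taken ({label}, confidence {score:.2f})")
```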

For consumers and businesses, Microsoft’s Copilot for Security, now available globally, uses generative AI to apply large-scale data and threat intelligence, drawing on the more than 78 trillion security signals the tech giant processes every day. “The industry’s first generative AI solution will help security and IT professionals catch what others miss,” says Vasu Jakkal, corporate vice-president for Security, Compliance, Identity, and Management at Microsoft.

Security copilots aren’t only meant to monitor for threats; a key element is assisting a human user, with added accuracy. Check Point Software, a cybersecurity company, has released its Infinity AI Copilot for workspaces, cloud networks and local networks, working alongside humans (there’s a chatbot too) to detect and counter incoming threats. Nvidia’s recent partnership with CrowdStrike will pair the latter’s cybersecurity methods with Nvidia’s Morpheus accelerated computing and NIM generative AI platforms for quicker threat detection.

In India, there is progress towards building the competence to fight online threats that use generative AI as a backbone, with an equal use of generative AI as a counter. The Karnataka Innovation and Technology Society (KITS) and tech company Cisco are collaborating on a program to improve cybersecurity skilling for as many as 40,000 professionals in the country. Last week, C3iHub, a National Technology Innovation Hub (TIH) for Advanced Cybersecurity at the Indian Institute of Technology Kanpur (IITK), announced a new start-up incubation program covering multiple domains, including cybersecurity and security using AI.



