The double-edged sword of AI: Navigating evolving cybersecurity threats
Today, a cyberattack is launched roughly every 39 seconds. From phishing to ransomware, cybercrime comes in many shapes and sizes, but whatever form an attack takes, the results can be devastating.
Cybercrime is on track to cost us $9.5 trillion in 2024. And with AI now being exploited by bad actors to commit more sophisticated attacks on a larger scale, that number will only increase.
So what does this evolving threat landscape look like from the trenches? And what are businesses doing to defend their most valuable digital assets against the fast-developing danger of AI-powered cybercrime?
RiverSafe’s recent report surveys CISOs from across the UK about their experiences in today’s cyber environment—and what challenges they’re facing as they fight back against cybercriminals in what’s shaping up to be a long-term AI arms race. Here are some of the key takeaways to help you prepare for a growing torrent of cyber threats.
Be aware of how AI is changing the threat landscape
One in five CISOs cite AI as the biggest cyber threat, as AI technology becomes both more available and more advanced.
AI tools are equipping cybercriminals with new abilities, and supercharging their most effective methods to help them levy attacks faster and on a larger scale. According to the National Cyber Security Centre (NCSC), AI is already being widely used in malicious cyber activity and “will almost certainly increase the volume and impact of cyberattacks, including ransomware, in the near term.”
One of the simplest, and most devastating, ways AI helps cybercriminals is by making it easy to modify common attacks so that antivirus software, spam filters, and other cybersecurity measures struggle to detect them.
Take malware, for example: a potentially crippling threat that does more damage the longer it goes undetected. With AI, hackers can morph malware to hide it from antivirus software. Once an AI-assisted piece of malware is flagged by a system's defenses, AI can quickly generate new variants the system won't recognize, allowing the malware to continue lurking in your environment, stealing sensitive data, spreading to other devices, and carrying out further attacks unnoticed.
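To see why morphing defeats traditional defenses, consider a minimal sketch. Many antivirus engines rely (in part) on signatures derived from known malicious files; the SHA-256 "signature" below is a deliberate oversimplification of real signature matching, but it shows the core problem: even a one-byte variant no longer matches the known signature.

```python
import hashlib

def signature(payload: bytes) -> str:
    """Compute a SHA-256 'signature', standing in for a real
    antivirus engine's far more sophisticated matching."""
    return hashlib.sha256(payload).hexdigest()

# Illustrative placeholder bytes, not real malware.
original = b"...known malicious payload..."
mutated = original + b" "  # a trivial variant, standing in for an AI-generated mutation

# The known signature no longer matches the variant.
print(signature(original) == signature(mutated))  # False
```

This is why defenders increasingly pair signature matching with behavioral and anomaly-based detection, which looks at what code does rather than what it looks like.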
And that’s just one use case. Cybercriminals are also using AI to bypass firewalls by generating what appears to be legitimate traffic, generating more effective and convincing social engineering content like phishing emails, and creating deepfakes to trick unknowing victims into handing over sensitive information.
Put policy in place to minimize the risk of AI misuse
It’s not only malicious outsiders that can use AI to harm your organization. Your employees, simply by innocently using AI tools to make their lives easier, can put your business at greater risk of suffering a major data breach.
One in five security leaders admitted that they’d experienced a data breach at their organization as a result of employees sharing company data with AI tools such as ChatGPT.
The accessibility and ease of use of generative AI tools have made them a popular option for employees, helping them to complete tasks or find answers to queries in a fraction of the time it would take to do so manually.
The vast majority of employees using these handy and seemingly straightforward tools don’t consider where the data they input into them is going, or how it might be used. Since they’re not sharing information directly with another person, many users won’t think twice about sharing proprietary business data with a chatbot if it helps them to do their jobs.
But data entered into generative AI tools isn't necessarily safe. In 2023, ChatGPT suffered its first major data breach, exposing payment details and other personally identifiable information (PII) of ChatGPT Plus subscribers.
These tools became ubiquitous almost overnight, and companies are now playing catch-up to mitigate the risks involved. Some have taken extreme measures in response, issuing outright bans on generative AI tools across their organizations, but such actions should only be a short-term stopgap. The reality is that generative AI is here to stay, and it offers real advantages to businesses and employees when handled properly. Education and carefully managed policies are a far better route to enjoying the benefits of AI while reducing security risks.
Don’t underestimate insider threats
A massive 75% of respondents said they believe insider threats pose a greater risk to their organization’s cybersecurity than external threats.
It’s well known that human error is one of the leading causes of data breaches and security incidents. And because these errors usually stem from ignorance or genuine, unintentional mistakes rather than a targeted attack, they’re among the hardest things to defend against. The breadth of the insider “attack” vector makes these threats harder still to mitigate, with potential risks coming not only from employees, but also from contractors, third parties, and anyone else with legitimate access to data or systems.
There’s clearly a widespread understanding of the damage insider threats can cause, but protecting against them is a challenge. Almost two-thirds (64%) of CISOs said their organization does not have sufficient technology to protect against insider threats.
With occurrences of insider threat-led incidents spiking by 47% over the past five years, that represents a shockingly high number of businesses that don’t have the right tools to tackle insider threats.
So what’s fueling this sharp increase? An ever-expanding attack surface is one factor. Digital transformation is the order of the day, and businesses are now more reliant on cloud solutions and infrastructure. While these solutions are often inherently more secure, the increasing complexity and interconnectedness of our IT environments can make maintaining appropriate access levels and proper security configurations a challenge.
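The access-configuration challenge above can be made concrete with a small audit sketch. The simplified IAM-style policy format and the `find_risky_statements` helper below are illustrative assumptions, not a real cloud provider's schema, but they show the kind of over-broad grant that widens the insider attack surface.

```python
# A minimal sketch of a configuration audit, assuming a simplified
# IAM-style policy document; real providers' schemas are richer.
def find_risky_statements(policy: dict) -> list[dict]:
    """Flag Allow statements that grant wildcard actions or resources."""
    risky = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and ("*" in actions or stmt.get("Resource") == "*"):
            risky.append(stmt)
    return risky

policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::reports/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # over-broad grant
    ]
}
print(len(find_risky_statements(policy)))  # 1
```

Running a check like this regularly, alongside least-privilege reviews, helps keep legitimate access from quietly becoming an insider risk as environments grow more interconnected.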
And it’s not only IT infrastructure that’s becoming more intricate. Digital supply chains are growing too, with organizations connecting to other businesses, partners, suppliers, and software vendors in ways that create new doorways into your environment for malicious attackers. In fact, it’s estimated that trusted business partners are now responsible for up to 25% of insider threat incidents.
The threat that AI presents to cybersecurity is increasing from both internal and external angles—and yesterday’s security strategies aren’t going to cut it if organizations want to mitigate the potentially massive damages that AI-fueled attacks can cause.
Businesses must revamp their cybersecurity policies, best practices, and employee awareness training to make sure that they’re prepared for a new age of cyber threats.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro