AI: On both sides of the cybersecurity fence
Threat actors are everywhere, looking for a way into your systems, whether through phishing, malware or social engineering. Just two examples from recent hacking history show the sheer amount of data that can be exposed:
- The Real Estate Wealth Network breach exposed 1.16 terabytes of data comprising 1.5 billion records, with famous names like Elon Musk and Kylie Jenner easily identified among them.
- UnitedHealth Group paid a ransom after a breach that exposed data on “a substantial proportion of people in America”; the exact number isn’t known, but consider what a “substantial proportion” means when the whole US population is currently headed toward around 342 million.
The IT Governance site keeps a running list of some of the larger global attacks on companies and the more notorious perpetrators. It’s likely you or someone you know has data in one of those breaches. It also means the role of cybersecurity professionals is secure as long as there are threat actors looking for a way in.
AI is on both sides of the fight
Whether AI played any part in those attacks isn’t fully known without conducting forensics on the code. But hackers and scammers are coders who know how to use developer tools and programming languages tuned to their work to squeeze out efficiencies. It’s not a stretch to think they’re using AI to write code, and develop attack vectors, faster and more efficiently.
While AI might be improving the more common brute-force, ransomware and phishing attacks, it is also improving breaches via social engineering. One such hack is voice cloning, which seems like it comes right out of the movies and is only getting better. The New Yorker detailed the story of a couple who were tricked by voice cloning into believing a relative was being held for ransom.
Even enterprise companies aren’t immune, if this story is true. According to reports in Hong Kong, authorities said a financial firm was tricked into transferring $25 million to hackers who used an AI deepfake video of the firm’s CEO to make the transfer appear authorized.
Several companies are developing AI-powered voice-cloning software, and even OpenAI said it has recently improved its own voice technology but is holding back the release while it assesses the risks of doing so.
Is that really who you think it is?
And just recently, Microsoft released VASA-1, a proof of concept that uses sophisticated generative AI to turn a still image into a video. The samples pair a still image with an audio track so that the image appears to animate and speak. The possibilities for using it to create deepfakes are mind-boggling.
Fighting AI with AI
Cybersecurity professionals have their work cut out for them. The best defense is to be proactive: try to stay ahead of attacks and exercise due diligence to maintain sound security practices from top to bottom. AI is already helping in a number of ways. Here are a few:
- Cybersecurity analysis tools are using AI to streamline the process of pinpointing weak points in security (the first sketch after this list shows the basic idea). Many companies also follow guidelines laid down by the Cybersecurity and Infrastructure Security Agency’s tool and the National Institute of Standards and Technology’s cybersecurity framework, which specifically has a component that takes AI into account. Tools from major companies like IBM, Microsoft and Amazon follow many of those suggestions, and many companies are using AI to build custom apps that turn the guidelines into internal cybersecurity policies.
- One of the more common sources of exposure is the sharing, whether intentional or accidental, of private information within enterprise systems. That’s where good data governance comes into play, and companies are starting to use AI to gain insight into their data and to build governance policies directly into their software and data connectors (the second sketch below illustrates the idea). Amazon, IBM, Microsoft and Salesforce have a variety of tools that use AI to enable data sharing with governance policies in place.
- At the code level, a number of application security testing tools now use AI to speed up the identification of code vulnerabilities; the third sketch below shows the sort of pattern they hunt for. Companies like Acunetix, CrowdStrike and Snyk offer AI-flavored tools for dynamic and static application testing.
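To make the first bullet concrete, here’s a minimal, hypothetical sketch of AI-assisted triage: it hands raw scanner findings to a language model and asks for a severity-ranked summary. The model name, the prompt wording and the triage_findings helper are illustrative assumptions, not a description of any vendor’s product, and the client expects an OPENAI_API_KEY in the environment.

```python
# Hypothetical sketch: ask a language model to prioritize raw scanner findings.
# Model name and prompt are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_findings(findings: list[str]) -> str:
    """Return a model-generated, severity-ranked summary of scanner findings."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever your org uses
        messages=[
            {
                "role": "system",
                "content": "You are a security analyst. Rank the findings below "
                           "by likely severity and suggest a first remediation "
                           "step for each.",
            },
            {"role": "user", "content": "\n".join(findings)},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_findings([
        "Port 3389 (RDP) open to the internet on host 10.0.0.12",
        "TLS 1.0 still enabled on the public load balancer",
        "Admin account with no MFA detected in the identity provider",
    ]))
```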
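For the data governance bullet, the core idea, checking outbound data against policy before it leaves a connector, can be sketched in a few lines. This is a deliberately simplified stand-in: PII_PATTERNS, scan_for_pii and share_record are hypothetical names, and real AI-based governance tools rely on trained NER and classification models rather than a handful of regexes.

```python
import re

# Simplified stand-in for an AI-based PII detector: a few illustrative regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> list[str]:
    """Return the PII categories detected in a piece of outbound text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def share_record(record: dict, send) -> bool:
    """Block sharing when any field appears to contain PII (illustrative policy)."""
    for field, value in record.items():
        hits = scan_for_pii(str(value))
        if hits:
            print(f"Blocked: field '{field}' looks like it contains {', '.join(hits)}")
            return False
    send(record)
    return True

if __name__ == "__main__":
    outbound = {"name": "J. Smith", "note": "Reach me at jsmith@example.com"}
    share_record(outbound, send=lambda r: print("Shared:", r))
```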
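And for the code-level bullet, here is a toy static check that flags two well-known vulnerability patterns in Python source, calls to eval()/exec() and hard-coded credentials, using only the standard-library ast module. AI-assisted application security testing products go far beyond this; the function name and the patterns chosen here are illustrative assumptions.

```python
import ast

RISKY_CALLS = {"eval", "exec"}  # classic injection-prone builtins
SECRET_HINTS = ("password", "secret", "api_key", "token")

def audit_source(source: str) -> list[str]:
    """Flag a few well-known risky patterns in Python source (illustrative only)."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Calls to eval()/exec() on arbitrary input are a common injection risk.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # String literals assigned to names that look like credentials.
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Constant):
            if isinstance(node.value.value, str):
                for target in node.targets:
                    if isinstance(target, ast.Name) and any(
                        hint in target.id.lower() for hint in SECRET_HINTS
                    ):
                        findings.append(
                            f"line {node.lineno}: hard-coded value for '{target.id}'"
                        )
    return findings

if __name__ == "__main__":
    sample = 'api_key = "sk-123456"\nresult = eval(user_input)\n'
    for finding in audit_source(sample):
        print(finding)
```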
Cybersecurity professionals don’t have to go it alone. AI is on your side, too.
On a side note: OpenAI program puts cybersecurity pros on defense
OpenAI unveiled a grant program to fund the development of new AI-based cybersecurity technology. The company is seeking applicants for the $1 million program, which offers $10,000 grants in the form of “API credits, direct funding and/or equivalents” for proposals of defensive (not offensive, for now) AI cybersecurity solutions.