Why the cybersecurity industry needs to re-frame the AI debate [Q&A]


Conversations around artificial intelligence and the threat it poses to humanity have been building for years. The launch of ChatGPT nearly a year ago, and the wave of generative AI tools that followed, thrust the issue to the forefront of the global agenda.

Discussions have reached such a fever pitch that in April this year, technologists including Elon Musk signed an open letter calling on AI labs to pause development of their most powerful models. Although the letter was widely criticized, it underscored how seriously the world is taking AI risks.

For cybersecurity professionals and organizations in particular, AI represents both an opportunity and a threat. At conferences all over the world, we’re told that AI will help reduce the skills shortages plaguing our industry. However, we are also acutely aware that whenever cybersecurity teams can utilize a new technology, threat actors can respond in kind. As the debate around AI risks and benefits rages on, we spoke to Jess Parnell, CISO at Centripetal, to explore how — and why — the cybersecurity industry must re-frame the AI debate.

BN: Is AI capable of independent thought?

JP: The concept of ‘human-competitive’ intelligence is one that gains a lot of attention, for obvious reasons. Statements and media activity (such as the letter) from technologists play on people’s fears that AI’s intelligence could surpass that of humans, which would have enormous ramifications for every facet of our society.

However, despite the hype — and hyperbole — AI is not capable of original thought. You can make it look like an original thought, but AI in its current form still has to pull from somewhere. This means that there absolutely need to be humans developing the rules and algorithms on which AI is based; the capabilities of an AI system reflect the humans who program it.

BN: Could AI replace a human cybersecurity analyst?

JP: The current crop of AI tools should be treated as just that: tools, which analysts can use to make their jobs easier, not a substitute for the analysts themselves. For example, while AI can draw conclusions based on calculations, there is always going to be a human helping to program those calculations.

Within the context of endpoint or network security, for instance, an AI tool can be incredibly useful. It can help to model user, network and application behavior, analyzing trends and determining whether they indicate compromised accounts, malware, or unusual user activity that could point to an insider threat.
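To make that concrete, here is a minimal sketch of the kind of behavioral baselining such a tool might perform. This is an illustrative assumption, not any vendor's implementation: the feature choices (login hour, data transferred, hosts touched) and the use of scikit-learn's IsolationForest are stand-ins for whatever model a real product would use.

```python
# Illustrative sketch only: flag anomalous sessions with an IsolationForest.
# The features and thresholds are assumptions for demonstration purposes.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" behavior: daytime logins, modest transfer volumes,
# a handful of hosts contacted per session.
normal = np.column_stack([
    rng.normal(11, 2, 1000),   # login hour (clustered around late morning)
    rng.normal(50, 15, 1000),  # MB transferred per session
    rng.poisson(3, 1000),      # distinct hosts touched
])

# Two suspicious sessions: small-hours logins, large transfers, many hosts.
suspicious = np.array([[3, 800, 40], [2, 650, 35]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for outliers and 1 for inliers.
for session in suspicious:
    verdict = model.predict(session.reshape(1, -1))[0]
    print(session, "-> anomalous" if verdict == -1 else "-> normal")
```

A model like this learns what "normal" looks like from historical data and scores deviations from it, which is exactly why, as Parnell notes next, it cannot know on its own whether a deviation is malicious or merely unusual.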

However, the rules on which these detections are based are developed by a cybersecurity professional and can only be fully contextualized by one. For example, if an organization is undergoing an M&A process, different parts of the network may be accessed, entirely legitimately, by unusual users or in unusual patterns. An AI tool would likely block and flag this as malicious, but it could not place the activity in its wider business context, unlike a well-informed and well-trained human analyst.
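That M&A example can be expressed as a thin rule layer over the model's output: the detector raises the flag, but analyst-supplied business context decides whether it is escalated. The sketch below is hypothetical throughout; the `business_exceptions` set, the `triage` function, and the 0.7 score threshold are invented for illustration.

```python
# Hypothetical sketch: combine a model's anomaly score with
# analyst-maintained business context before escalating.
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    segment: str         # network segment that was accessed
    anomaly_score: float # 0.0 (normal) to 1.0 (highly anomalous)

# Analyst-maintained context: during the M&A window, the finance
# due-diligence team legitimately accesses the deal-room segment.
business_exceptions = {("finance-dd", "deal-room")}

def triage(alert: Alert, user_team: str) -> str:
    """Decide what to do with a model-generated alert."""
    if alert.anomaly_score < 0.7:
        return "ignore"
    if (user_team, alert.segment) in business_exceptions:
        return "log-only"   # unusual, but expected during the M&A
    return "escalate"       # genuinely suspicious: route to an analyst

print(triage(Alert("asmith", "deal-room", 0.9), "finance-dd"))  # log-only
print(triage(Alert("jdoe", "deal-room", 0.9), "engineering"))   # escalate
```

The point of the design is that the exception list lives with the humans who understand the business, not inside the model.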

BN: How can defensive actors better frame the AI debate?

JP: Rather than traditional Artificial Intelligence, we should think of AI as Augmented Intelligence. AI was originally deployed within our industry as a solution to the skills gap: because we do not have enough analysts to fill the vacant positions, people sought a technological solution.

For the reasons outlined above, AI is still not capable of fully replacing the work done by a human analyst. This is where the concept of Augmented Intelligence comes in: rather than viewing AI as a replacement for analysts, intelligently programmed solutions plug the gap, handling the more basic tasks while the strong pipeline of entry-level talent moves up into more senior network defense roles. Not only does this make use of all the technological weapons in our arsenal, it also helps the industry progress junior employees into more senior positions. This is the key difference between AI and Augmented Intelligence as a cybersecurity school of thought: one aims to reduce the work done by humans, the other to elevate that work so that a healthy pipeline of security talent continues to thrive.

BN: How are threat actors also using Augmented Intelligence?

JP: That is the good news. The bad news is that the bad guys are thinking along the same lines as the defenders. Threat actors are also pursuing a program of Augmented Intelligence, using automated attacks and scanning techniques to identify potential targets, which a human attacker then exploits once they have been qualified. This level of sophistication and automation in an offensive capacity is already proving disastrous, and it puts to bed the idea, still held by some organizations, that ‘nobody would want to target me’. Because these cybercriminals automate the first stage of reconnaissance, they don’t need to target you specifically. They may, however, find something that makes you an appealing target within the automated data set they have generated.

Image credit: BiancoBlue/Dreamstime.com




