Cybersecurity

In conversation: companies’ own AI applications are ‘a huge cybersecurity problem’ – expert


Artificial intelligence (AI) poses a potentially serious cybersecurity threat to companies that have deployed it as part of their service or offer – not just through its use by criminals to perpetrate attacks – according to an expert in the field.

While the threat posed by bad actors using AI to deliver attacks has been widely discussed, Peter Garraghan, CEO and CTO of Mindgard, which provides cybersecurity for AI specifically, tells Verdict: “The problem we’re talking about here is cybersecurity threat against AI itself.”

Perhaps the most common and thus at-risk use of AI by companies is for customer service chatbots, which are increasingly prevalent and are typically tailored with company-specific data in the background.

Garraghan, who is also a chair professor of computer science at Lancaster University specialising in AI security and systems, founded Mindgard in 2022 after realising the potential severity of the issue around a decade ago.

“AI is not magic,” he says. “It’s still software, data and hardware. Therefore, all the cybersecurity threats that you can envision also apply to AI.”

By way of example, Garraghan gives the analogy of SQL injection – a technique via which vulnerabilities in a web application can be exploited by code inputted into fields in the likes of website login or contact forms. A similar approach called prompt injection can be used for public-facing AI applications. If not properly secured, AI tools can effectively be coaxed into giving out source code and instructions, business IP or even customer data.

Access the most comprehensive Company Profiles
on the market, powered by GlobalData. Save hours of research. Gain competitive edge.

Company Profile – free
sample

Thank you!

Your download email will arrive shortly

We are confident about the
unique
quality of our Company Profiles. However, we want you to make the most
beneficial
decision for your business, so we offer a free sample that you can download by
submitting the below form

By GlobalData






Visit our Privacy Policy for more information about our services, how we may use, process and share your personal data, including information of your rights in respect of your personal data and how you can unsubscribe from future marketing communications. Our services are intended for corporate subscribers and you warrant that the email address submitted is your corporate email address.

Similarly, AI tools can be reverse-engineered in the same ways that other applications can be to identify vulnerabilities.

Of the gravity of the problem, Garraghan says: “We could envision even four or five years ago massive adoption, nation-state problems, disenfranchised people, organisations at risk. We need to think about this.”

Threats to AI applications

The potential for leaked data is likely to make any business take note, but the ease with which AI applications might leak data is alarming.

“There are cybersecurity attacks with AI whereby it can leak data, the model can actually give it to me if I just ask it very politely to do so,” explains Garraghan. This was exemplified in January when Gab AI, a platform launched by right-wing social media company Gab, was coaxed into revealing its instructions. OpenAI’s GPT platforms have previously revealed data upon which they are built too.

Garraghan continues: “There are other attacks where I can figure out what data it has and then reverse engineer it without even seeing it, or I can figure out how the AI can be bypassed or tricked, so I can get access to other systems from it. I think data leakage is definitely cross-cutting [of industries] – and that includes both externally facing and internally.”

Among the other significant threats he points to is model evasion, whereby input data is designed to manipulate or subvert the operation of the AI model.

“Let’s say I have some sort of document or face scanner for trying to identify someone’s identity,” he says. “If you know how the model works and some trickery, you can figure out how do I trick it so I can bypass detection or I can be misclassified. There are quite a few reported case studies of people doing financial fraud by tricking vision models, for example.”

Malicious commands hidden in audio prompts and the poisoning of data to deliver inaccurate responses are other threats Garraghan notes, and he adds that the overarching impact for businesses – as with other cyberattacks – can be reputational damage.

Who’s at risk and what can be done?

As with cybersecurity more broadly, there is naturally greater risk for industries in which the stakes are higher. Financial services and healthcare are two sectors, for example, which necessarily must be more secure than others.

Garraghan says: “There is a correlation here, which is that the more confidential and the more regulated you as an industry, the more at risk you are from AI – but also, from experience, the less they’re adopting. I don’t think they’re slower. Let’s say it’s because they have a lot of genuine risks to get through.”

In terms of tackling those risks within any company, though, he is clear that AI applications will require – or require now – their own layer of protection.

“You currently have cybersecurity tools, and they specialise in certain things,” says Garraghan. “You have a security posture management system, you have a detection response system, you have a firewall, you have very shift left in terms of design, code scanning – all these types of things. You’re going to need an AI equivalent to help with these. Those type of things specialises just in AI and machine learning and neural networks.

“You’re going to need a code scanner for neural networks, you’re going to need a detection response system for neural networks, you’re going to need a security-testing, red-teaming capability …  If you catch things upstream of problems, it is much easier to remediate and fix it as opposed to runtime. The best practice we encourage for organisations is whenever they build AI models, or wherever they purchase AI applications or services, before anything goes live, the more we can fix before it goes to production, it is so much easier to then identify what problems are to actually fix.”

In a nutshell, Garraghan’s take is as follows: “The best thing anyone can do in this space is replace the word AI with software or application. Yes, you need application testing and application threat detection, AI is no exception.”






Source

Related Articles

Back to top button