Cybersecurity

Can we fix the human error problem in cybersecurity?

Kyndryl’s Kris Lovejoy discusses a ‘holistic approach’ to building cyber resilience and the need for AI cyber defences to beat AI cyberattacks.


Cyberattacks continue to be a major threat for organisations of every size and can lead to disastrous consequences in some cases.

Ransomware attacks can lead to the data of millions of individuals being stolen by criminals, while paid ransoms end up funding future malicious activity. Organisations also have to be wary of phishing attacks, malware campaigns, DDoS attacks and more.

Cybersecurity teams have lots of tools at their disposal to protect their organisations from attackers but, unfortunately, new technology also opens up new avenues for attack – AI in particular can be used both to defend an organisation and to attack it.

Kris Lovejoy, a global security and resiliency leader at Kyndryl, is all too aware of the changing cybersecurity landscape. She recently spoke to SiliconRepublic about cyber resilience and how businesses can prepare for upcoming EU regulation around cybersecurity.

For Cybersecurity Week, Lovejoy has returned to discuss the rapidly evolving cybersecurity threat landscape. A recent IDC report commissioned by Kyndryl suggests most IT leaders perceive malware as “the most significant risk to their business”. The report also highlighted the prevalence of ransomware attacks.

“70pc of respondents said that they had been successfully targeted by ransomware within the last year, with two-thirds of those respondents choosing to pay the ransom,” Lovejoy said. “The vast majority of those surveyed said that the attack exfiltrated company data, causing significant damage and disruption to the business.”

The human error problem

Lovejoy said CIOs can’t always predict the next cyberattack, but that they can work towards “building cyber resilience”.

“By cyber resilience, I’m talking about an approach that combines cybersecurity with business continuity and disaster recovery to anticipate, protect against, withstand and recover from adverse cyber events,” she said.

Lovejoy also believes that this type of resilience “transcends technology” and requires both a shift in mindset and the people and processes of an organisation “coming together to act in a nimble and agile manner”.

“Organisations need to start viewing the pursuit of resilience in a more holistic way,” she said. “The first step is to identify the critical digitally enabled services upon which the organisation is dependent. Next, create an application-to-infrastructure mapping and assess whether the controls in place make it possible for you to reasonably protect against a disruptive attack, detect breaches, and respond and recover if things go awry.

“Lastly, build a strategy and roadmap for improving resilience and modernising infrastructure with the goal of improving security and resilience outcomes.”

On the point of transcending technology, Lovejoy also claims – from her own experience – that most cyberattacks can be traced back to “a human being who made a mistake”. There are numerous examples of breaches that began with a staff member clicking a malicious link.

A more extreme example is last year’s data breach at the Police Service of Northern Ireland (PSNI), in which the personal details of all PSNI staff were shared in a public document due to “human error”.

“Continual focus on cybersecurity awareness – instilling a culture of cybersecurity awareness and healthy scepticism – acts like herd immunity,” Lovejoy said. “The fewer humans subject to ‘infection’, the more we can statistically reduce the likelihood of a successful attack.”

A report from earlier this week, based on insights from 1,600 CISOs globally, suggests human error and AI are the key challenges they face. An earlier report from IT consulting company STX Next also claimed that human error was a major concern among CTOs.

The need for AI – to face AI

Lovejoy also noted how new technology increases the risk of individuals being tricked into making a mistake.

“Generative AI heightens the risk of successful manipulation of these inadvertent actors by enabling incredibly realistic and sophisticated phishing attacks,” Lovejoy said. “It also provides threat actors with opportunities to craft malware which can more successfully evade common controls.”

Lovejoy believes that AI is likely to increase the volume and the impact of cyberattacks over the next few years, a concern shared by various experts in the industry.

“Cyber criminals are compromising organisations’ user accounts with complex targeted social engineering attacks,” she said. “The growing use of AI techniques only serves to increase the challenge of detection.

“Unless we arm ourselves with AI-enabled cyber defences that are stronger than AI-enabled cyberthreats, it will be difficult, impossible even, for businesses to adapt. CIOs need to validate essential control implementation, stress test response and recovery capabilities, and adapt training and awareness programs to include potential for multi-vector social engineering attacks.”
