Why Human Risk Management is Cybersecurity’s Next Step for Awareness
Amid frequent warnings about the advanced capabilities of cyber threat actors, targeting human frailties remains attackers' primary initial access method. This reality has driven the development of human risk management (HRM), an approach that focuses on targeted, intelligence-led interventions to improve security behaviors.
The scale of human risk factors was highlighted in Verizon’s 2024 Data Breach Investigations Report (DBIR), which found that 68% of all breaches involved a non-malicious human element in 2023.
Cybersecurity awareness training has been commonplace in organizations for many years, yet problems around human errors persist, such as clicking malicious links in phishing emails.
Training alone is insufficient to deal with this problem, especially as the human involved is often not to blame.
John Scott, Lead Cyber Security Researcher at CultureAI, told Infosecurity: “People will always make mistakes. That’s not a moral failing, sometimes that’s because of factors like the system, the fact that your boss is shouting at you to get something done quickly.”
This recognition has given rise to human risk management, which accepts that human error will occur but proactively identifies risks for individual employees, enabling targeted interventions to be made.
How Human Risk Management Works for Cybersecurity
Traditional security awareness training gives employees knowledge about cybersecurity risks, but it fails to train reactions and habits, Scott noted.
For example, an employee may understand they shouldn’t share personal information with a colleague over a public Slack channel but does so because they are facing time pressures.
Scott said: “Our brain knows it, but our gut doesn’t.”
The first component of an HRM strategy is gaining visibility across the organization to understand where cyber risks lie with individual employees – monitoring their actual behaviors.
This visibility then allows 'just-in-time coaching' – real-time nudges that correct risky behaviors known to exist. This targeted approach also prevents training fatigue, whereby employees who are constantly told to do something irrelevant to them switch off, and may even circumvent controls as a result.
“We’re not going to tell you to stop doing something you’re not doing – it’s much more respectful of your time,” explained Scott.
The nudges are not diktats – they are designed to alert the employee to a potentially insecure behavior. For example: 'Did you mean to share that information on Slack?' The employee can then choose whether to continue with the action.
The nudges can also be combined with security processes that make it easier for employees to make the secure choice, such as a message informing them that certain data will be deleted in 30 seconds unless they direct otherwise.
Scott noted: “Using good choice architecture and making the defaults the safest option is really key for nudging.”
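To make the idea concrete, here is a minimal sketch of a just-in-time nudge with a safe default, in the spirit of the approach Scott describes. The detector patterns, the `Nudge` structure, and the 30-second auto-delete default are all illustrative assumptions, not CultureAI's actual implementation; a real HRM platform would rely on richer detectors and platform integrations.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical patterns a monitoring integration might flag; real HRM
# platforms use far richer detection (DLP classifiers, SaaS API events, etc.).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

@dataclass
class Nudge:
    """A prompt, not a diktat: the employee chooses what happens next."""
    message: str
    default_action: str   # the safest option is the default (choice architecture)
    timeout_seconds: int  # default applies unless the user overrides in time

def check_message(text: str) -> Optional[Nudge]:
    """Return a nudge if the message looks risky, else None."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    if not hits:
        return None  # don't nudge people about things they aren't doing
    return Nudge(
        message=f"Did you mean to share {', '.join(hits)} in a public channel?",
        default_action="delete_message",  # safe default: remove unless overridden
        timeout_seconds=30,
    )
```

A benign message produces no nudge at all, keeping distractions to a minimum; a message containing something that looks like a credential triggers a prompt whose default outcome is the secure one.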
He also emphasized that nudges should not be overused, since every nudge is a distraction. A nudge is worth sending, for example, when an individual employee is the only one who can address the issue and is in a position to do so quickly.
“What we’re finding is that nudges are just as subject to prompt tiredness as everything else. If you’re getting nudged about everything, eventually you’ll start ignoring the nudges,” Scott explained.
Implementing Human Risk Management Effectively
Automation technologies can significantly help with gaining the necessary visibility of workforce activity – something Scott described as a “single pane of glass” to show where the risks are.
Then, organizations need to combine processes with automation to put appropriate interventions in place.
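The combination of visibility and automated intervention can be sketched very simply. The event types, weights, and threshold below are invented for illustration; in practice these signals would come from the platform integrations Scott describes, and the weights would be tuned to the organization.

```python
from collections import defaultdict

# Hypothetical event weights; a real program would tune these from its own
# telemetry across integrations (email, chat, SaaS, endpoint...).
RISK_WEIGHTS = {"phish_click": 5, "public_share": 3, "mfa_disabled": 8}

def risk_dashboard(events):
    """Aggregate per-employee risk scores from many sources into one view –
    a 'single pane of glass' showing where the risks are."""
    scores = defaultdict(int)
    for employee, event_type in events:
        scores[employee] += RISK_WEIGHTS.get(event_type, 1)
    return dict(scores)

def needs_intervention(scores, threshold=8):
    """Pick out employees whose combined signals warrant a targeted step,
    such as a nudge or a focused training exercise."""
    return sorted(emp for emp, score in scores.items() if score >= threshold)
```

The point of the sketch is the shape of the pipeline: many low-level signals roll up into one per-employee view, and interventions are triggered only where the combined risk justifies the interruption.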
HRM programs also require continuous updates as members of staff change and new technology capabilities get rolled out across the organization. Scott said the HRM platform must be integrated with all new data sources.
A key example of this is the growing use of large language models (LLMs), such as ChatGPT, across organizations. This has resulted in confidential company information being posted into these public platforms.
“Your platform needs to be increasing the number of integrations so it can monitor all places where human risk exists,” stated Scott.
He added that the insights garnered from HRM programs can be used to continually sharpen awareness training by making it more targeted, both in the topics covered and the employees it reaches.
For example, if new starters are found to be much more susceptible to clicking on phishing messages, they should be the focus for phishing training exercises.
Finding innovative ways to tackle cyber-threats targeting the human element will form a major part of the Infosecurity Europe conference program.
The event is taking place from June 4-6 at the ExCel in London. Register here to ensure your attendance.