Cybersecurity Policy – Developments to Watch
New Cybersecurity Rules in the EU
Technology legislation passed in the European Union can impact the whole world, and several cybersecurity bills are currently moving through EU lawmaking bodies.
Arguably the most well-known is the EU AI Act, which takes a risk-based approach: it imposes risk management, transparency, and reporting obligations on high-risk AI systems and bans practices deemed to pose unacceptable risk. The law applies to AI systems used in the EU, regardless of the location of the deployer or provider.
Other EU security legislation to watch includes the recently passed Network and Information Security 2 Directive (NIS2), which member states must transpose into national law by October 2024, and the Digital Operational Resilience Act (DORA), effective in January 2025. NIS2 creates new cybersecurity risk management obligations for essential and important entities across covered sectors, while DORA establishes operational resilience obligations for financial entities and their critical ICT providers.
Top Cybersecurity Trends in 2024
The Growth of AI-Driven Risks
Perhaps the most significant cybersecurity development of the past year is the ubiquity of AI.
Alison King, vice president of government affairs at cybersecurity provider Forescout, says that a primary cybersecurity concern should be ensuring “AI tools are used securely and ethically.”
The hype surrounding AI may lead organizations to adopt AI technologies before fully vetting their security, for example by inputting sensitive information into tools built on public AI platforms or failing to anonymize the data used to train a GPT.
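As a rough illustration of what "anonymizing" data can mean in practice, the sketch below masks a few common types of personal information before text is handed to any external AI service. It is a minimal example only: the regular expressions, the sample record, and the send_to_ai_service placeholder are all hypothetical, and real deployments would rely on purpose-built PII-detection tooling rather than hand-written patterns.

```python
import re

# Illustrative PII-masking patterns; real systems would use a dedicated
# PII-detection tool rather than hand-written regexes like these.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?\d{1,3}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def anonymize(text: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    text = SSN_RE.sub("[SSN]", text)
    return text

def send_to_ai_service(prompt: str) -> None:
    # Hypothetical placeholder for a call to a public AI platform's API.
    print("Sending to AI service:", prompt)

record = "Customer Jane Roe (jane.roe@example.com, 555-867-5309) reported an outage."
send_to_ai_service(anonymize(record))
# Prints: Sending to AI service: Customer Jane Roe ([EMAIL], [PHONE]) reported an outage.
# Note that the customer's name still passes through untouched, which is why
# simple pattern matching alone is not a sufficient anonymization strategy.
```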
“As organizations strive to leverage AI responsibly, they must balance innovation with risk mitigation,” King says. “While defenders stand to benefit from AI’s capabilities, malevolent actors can weaponize these same technologies for nefarious purposes.” In particular, experts predict that generative AI could lead to more convincing social engineering attacks.
Failing to vet the security of AI platforms may also undermine investments in those platforms down the road, as governments have been taking note of AI's risks and potential. Draft legislation related to AI has been proposed both in the U.S. and abroad.
Governmental Attention on AI
In the coming months, “Look to governments to increase focus on AI systems, toeing the line between innovation and security,” advises Eric Skibinski, consultant at FiscalNote. “These governments are excited to leverage the new technology, but are also concerned with the risks associated with it.”
At the federal level, President Biden has issued an executive order on AI that encourages, among other things, higher standards for AI security. In response, the Cybersecurity and Infrastructure Security Agency (CISA) has published guidelines on securing critical infrastructure against AI-related threats, and the Department of Homeland Security (DHS) has created an AI safety and security board.
In Congress, Senate Majority Leader Chuck Schumer (D-NY) recently proposed $32 billion in annual federal spending on AI innovation, and a bipartisan Senate AI working group has released a road map for AI-related legislation.