UK government launches AI cybersecurity codes of practice
The UK government is calling for views on a new set of voluntary guidelines for AI cybersecurity. The ‘AI Cyber Security Code of Practice’ will include recommendations for developers on how best to protect their AI products and services against breaches, sabotage and tampering. During a speech at the CYBERUK conference, technology minister Saqib Bhatti said the new guidelines would help establish a global standard for AI cybersecurity and give British businesses greater protection against cyberattacks.
“We have always been clear that to harness the enormous potential of the digital economy, we need to foster a safe environment for it to grow and develop,” said Bhatti. “This is precisely what we are doing with these new measures, which will help make AI models resilient from the design phase.”
AI cybersecurity guidelines partially based on NCSC guidelines
The draft AI Cyber Security Code of Practice, developed by the Department for Science, Innovation & Technology (DSIT) and based on the National Cyber Security Centre (NCSC) guidelines on secure AI system development published late last year, arrives amid mixed news for the UK cybersecurity scene. Though the sector has grown by 13% since last year, according to government figures, half of businesses and almost a third of charities reported suffering breaches over the same period.
The growing popularity of generative AI among businesses is likely to open new avenues of attack for cybercriminals. “GenAI systems are particularly vulnerable to data poisoning and model theft,” said Kevin Curran, professor of cybersecurity at Ulster University and a senior member of the Institute of Electrical and Electronics Engineers. “If companies cannot explain how their GenAI systems work or how they have reached their conclusions, it can raise concerns about accountability and make it difficult to identify and address other potential risks.”
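In a data poisoning attack, an adversary tampers with a model’s training data so that the finished model misbehaves. The sketch below is illustrative only, not drawn from the code of practice: it uses a synthetic scikit-learn dataset to show how even crude label flipping erodes a classifier’s accuracy.

```python
# Minimal sketch (illustrative, not from the code of practice): how
# label-flipping data poisoning can degrade a model. All names and
# numbers here are arbitrary choices for the example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean, synthetic training data standing in for a real corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the labels of a random fraction of training examples,
    simulating an attacker who can tamper with the training set."""
    y = y.copy()
    n_poison = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y[idx] = 1 - y[idx]  # binary labels: flip 0 <-> 1
    return y

for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned {fraction:.0%} of labels -> "
          f"test accuracy {clf.score(X_test, y_test):.3f}")
```

Running the sketch shows test accuracy falling as the poisoned fraction rises, which is why the draft code’s emphasis on protecting training pipelines from tampering matters.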
The new AI cybersecurity guidelines will give businesses a list of best practices and recommendations for addressing these challenges, said the NCSC’s chief executive Felicity Oswald. “The new codes of practice will help support our growing cyber security industry to develop AI models and software in a way which ensures they are resilient to malicious attacks,” said Oswald. “Setting standards for our security will help improve our collective resilience and I commend organisations to follow these requirements to help keep the UK safe online.”
Labour Party views on AI incoming
The call for views will run until 10 July 2024. In the meantime, companies experimenting with AI applications would be well placed to take their own steps to shore up their security, said Curran.
“Organisations should consult with data protection experts and keep abreast of regulatory changes,” he explained, adding that this “helps not only in avoiding legal pitfalls but also in maintaining consumer trust by upholding ethical AI practices and ensuring data integrity. Other best practices include minimising and anonymising data use, establishing robust data governance policies, conducting regular audits and impact assessments, securing data environments, and reminding staff of current security protocols.”
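On the data-minimisation point, a minimal sketch of what this can mean in practice is shown below. The field names, allow-list and salt handling are assumptions made for illustration, not part of any official guidance: unneeded fields are dropped and direct identifiers are pseudonymised before records reach an AI pipeline.

```python
# Minimal sketch (illustrative only): data minimisation and
# pseudonymisation before records reach an AI pipeline. Field names
# and the salt-handling scheme are assumptions for this example.
import hashlib

# Only these fields are allowed downstream (minimisation).
ALLOWED_FIELDS = {"user_id", "query_text", "timestamp"}

def pseudonymise(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash. Note: hashing is
    pseudonymisation, not full anonymisation; the salt must stay secret."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def minimise_record(record: dict, salt: str) -> dict:
    """Drop fields not on the allow-list and pseudonymise the user ID."""
    cleaned = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "user_id" in cleaned:
        cleaned["user_id"] = pseudonymise(cleaned["user_id"], salt)
    return cleaned

raw = {
    "user_id": "alice@example.com",
    "query_text": "reset my password",
    "timestamp": "2024-05-15T09:00:00Z",
    "home_address": "1 Example Street",  # never needed by the model
}
print(minimise_record(raw, salt="replace-with-secret-salt"))
```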
Today’s call for views on both codes of practice should be viewed in the context of the Conservative government’s wider work on AI safety, said its minister for AI and Intellectual Property, Viscount Camrose. Specific policies from the opposition Labour Party, meanwhile, remain scant despite a Green Paper on technology policy promised last year. However, shadow DSIT secretary Peter Kyle said today that the party will set out its views on AI in the coming weeks as part of a policy push ahead of the general election later this year.