Cybersecurity

UK Government Publishes AI Cybersecurity Guidance


Artificial Intelligence & Machine Learning
,
Geo Focus: The United Kingdom
,
Geo-Specific

Guidance Is First Step to Global Standard, Says Minister for AI

Image: Shutterstock

The U.K. government released voluntary guidance intended to help artificial intelligence developers and vendors protect models from hacking and potential sabotage.

See Also: Does Office 365 Deliver The Email Security and Resilience Enterprises Need?

Released on Wednesday, the British government’s AI code of practice lists recommendations such as monitoring AI system behavior and performing model testing.

“Organizations in the U.K. face a complex cybersecurity landscape, and we want to ensure that they have the confidence to adopt AI into their infrastructure,” said Minister for AI and Intellectual Property Jonathan Camrose.

The U.K. government said companies should strengthen the AI supply chain security and decrease potential risks from vulnerable AI systems, such as data loss. The guidance recommends measures such as procuring secure software components including models, frameworks, or external APIs only from verified third-party developers and ensuring the integrity of training data sourced from publicly available sources.

“Particular attention should be given to the use of open-source models, where the responsibility of model maintenance and security becomes complex,” the guidance states.

Other measures include training AI developers on secure coding, having in place security guardrails for different AI models, and being able to interpret and explain AI models.

The U.K. government intends to transform the guidance into a global standard to promote security by design in AI systems. As part of the plan, the government opened a consultation inviting responses until July 10.

The Conservative government vowed during a November summit to push for a shared global approach to AI safety (see: UK’s AI Safety Summit to Focus on Risk and Governance).

The guidance comes just days after the U.K. AI Safety Institute released an AI model evaluation platform called Inspect, which allows startups, academia and AI developers to assess specific capabilities of individual models and produce a score based on their results

The U.S. and the U.K. AI Safety Institutes in April said they will work together to develop safety evaluation mechanisms and guidance for emerging risks (see: US and UK Partner to Align on AI Safety and Share Resources).





Source

Related Articles

Back to top button