AI under new management: EU sets global standard with Artificial Intelligence Act
On May 21, the European Union (EU) gave final approval to the Artificial Intelligence Act, the world's first comprehensive law regulating AI. The legislation establishes a unified regulatory and legal framework for AI across member states. The Act also creates the European Artificial Intelligence Board to facilitate cooperation among national authorities and ensure consistent compliance.
The new law seeks to balance two goals: promoting the development and adoption of safe and trustworthy AI systems, and protecting the fundamental rights of citizens.
Similar to the EU's General Data Protection Regulation (GDPR), the AI Act has extraterritorial application, meaning it also binds non-EU providers whose systems have users within the EU. The Act covers a wide range of AI applications across sectors, with exceptions for systems used solely for military, national security, research or non-professional purposes. As a product regulation, it does not confer individual rights; instead, it regulates providers of AI systems and their professional users.
In response to the rise of generative AI systems such as ChatGPT, the draft Act was revised to cover general-purpose capabilities, which did not fit the original risk-based framework. More restrictive rules are planned for powerful generative AI systems with systemic impact.
The AI Act categorises non-exempted AI applications by their risk of causing harm into four levels: unacceptable, high, limited and minimal risk, with an additional category for general-purpose AI. Applications deemed to pose an unacceptable risk are banned. High-risk applications must meet stringent security, transparency and quality obligations and undergo conformity assessments. Limited-risk applications face only transparency obligations, while minimal-risk applications are not regulated. General-purpose AI is subject to transparency requirements, with additional evaluations for high-capability models. This tiered classification matches the stringency of regulation to the potential harm posed by each application.
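To make the tiered structure concrete, here is a minimal sketch in Python that models the Act's risk categories and the obligations summarised above. The category names follow the Act, but everything else (the RiskTier enum, the OBLIGATIONS table and the obligations_for function) is a hypothetical illustration of how a team might encode the taxonomy internally, not a legal reference.

```python
from enum import Enum


class RiskTier(Enum):
    # The four risk levels named in the Act, plus the extra
    # category for general-purpose AI.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    GENERAL_PURPOSE = "general-purpose"


# Hypothetical, highly simplified summary of obligations per tier,
# paraphrasing this article rather than the legal text.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["banned outright"],
    RiskTier.HIGH: [
        "stringent security, transparency and quality obligations",
        "conformity assessment",
    ],
    RiskTier.LIMITED: ["transparency obligations"],
    RiskTier.MINIMAL: ["not regulated"],
    RiskTier.GENERAL_PURPOSE: [
        "transparency requirements",
        "additional evaluations for high-capability models",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the summarised obligations attached to a risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(obligations_for(tier))}")
```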
Most provisions of the new regulation will apply two years after it enters into force, with shorter or longer transition periods for certain provisions. This timeframe gives organisations in the sector ample opportunity to establish governance mechanisms that meet the new legislative requirements.
Organisations can use this time to define a compliance pathway, setting timelines, milestones and compliance audit procedures, and to set up a risk management framework. Doing so would not only ensure compliance with the Act but could also strengthen customer trust and help grow a customer base.
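As one illustration of such a pathway, the sketch below lays out a hypothetical compliance roadmap as dated milestones that can be tracked and audited. All names and dates here (Milestone, ROADMAP, the example deadlines) are invented for illustration; actual deadlines depend on the Act's transition periods and on each organisation's own systems.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Milestone:
    """One step in a hypothetical AI Act compliance roadmap."""
    name: str
    deadline: date
    done: bool = False


# Invented example roadmap; real dates depend on the Act's
# transition periods and the organisation's own systems.
ROADMAP = [
    Milestone("Inventory all AI systems in use", date(2024, 12, 31)),
    Milestone("Classify each system by risk tier", date(2025, 3, 31)),
    Milestone("Stand up a risk management framework", date(2025, 9, 30)),
    Milestone("Run an internal conformity audit", date(2026, 3, 31)),
]


def overdue(roadmap: list[Milestone], today: date) -> list[Milestone]:
    """Return milestones past their deadline and still open."""
    return [m for m in roadmap if not m.done and m.deadline < today]


if __name__ == "__main__":
    for m in overdue(ROADMAP, date(2025, 6, 1)):
        print(f"OVERDUE: {m.name} (due {m.deadline.isoformat()})")
```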