
Understanding the Requirements of the EU Artificial Intelligence Act


The European Union is developing a comprehensive regulatory framework aimed at addressing the deployment and potential risks of artificial intelligence (AI) systems. The European Union Artificial Intelligence Act (the “AI Act”) has yet to be finalized, but some members of the European Parliament have estimated that a final draft could be ready by Summer 2025, with enactment shortly thereafter. In the meantime, the European Commission’s December 2023 press release offers insight into the coming regulations. Once the AI Act is enacted, “Prohibited AI” systems must be phased out within 6 months, compliance with general AI governance obligations will be required after 12 months, and all remaining rules, including obligations for high-risk AI systems, will take effect within 24 to 36 months.

Who is Impacted?

The AI Act will apply to both public and private entities that make their AI systems available on the EU market or whose use of an AI system affects people located in the EU, including internationally based entities that do business in the EU. As such, both AI developers and those implementing an AI system within the EU will be responsible for ensuring the system conforms to the AI Act.

Exemptions will be available for prototyping and development activities preceding the AI system’s release to market and for military or national security purposes.

What Does the AI Act Require?

The AI Act introduces a risk-based approach with four levels: unacceptable risk (prohibited AI practices), high risk, transparency risk, and minimal risk.

Minimal Risk and Transparency Risk AI systems fall into the category of General Purpose AI and may require:

  • Transparency measures, such as technical documentation, training data summaries, and copyright and IP safeguards;
  • Evaluations, risk assessments, adversarial testing, and incident reporting for high-impact models that carry systemic risks (currently defined as any model trained using more than 10^25 floating-point operations (FLOPs)).

What are the Penalties?

  • Up to €35 million or 7% of total worldwide annual turnover for prohibited AI violations.
  • Up to €15 million or 3% of total worldwide annual turnover for most other violations.
  • Up to €7.5 million or 1.5% of total worldwide annual turnover for supplying incorrect information.

For each category of noncompliance, the penalty cap is the lower of the two amounts for small and midsize enterprises (SMEs) and the higher of the two for larger companies.

Notably, the EU’s General Data Protection Regulation (GDPR) also contains notice requirements for automated decision-making, and the European Parliament has opined that these requirements include informing data subjects when their personal information is used for AI training. Because the AI Act will impose its own notice and transparency requirements, failing to meet them may create a risk of combined penalties. Failure to meet the GDPR’s transparency requirements may result in fines of up to €20 million or 4% of the firm’s worldwide annual revenue from the preceding financial year, whichever amount is higher.

What Can You Do to Prepare?

If your business operates in the EU and develops or implements AI, or plans to, you can begin preparing for the AI Act by keeping the four risk categories in mind. You should also keep records of the types of data your AI will use and the purposes for which that data will be used. This information will almost certainly be critical when drafting the policies and disclosures needed to comply with the AI Act’s transparency requirements and any notice obligations under the GDPR. And, as we have seen with the GDPR, the AI Act may be a preview of future AI regulation in the U.S.


