Brussels heralds chain reaction of legislation on artificial intelligence – Opinion
The Artificial Intelligence Act, which was approved by the Council of the European Union late last month, is a ground-breaking law that aims to harmonize rules on AI. The flagship legislation follows a "risk-based" approach, and it could set a global standard for AI regulation.
The new law aims to foster the development and uptake of safe and trustworthy AI systems across the European Union's single market by both private and public actors. At the same time, it aims to ensure respect for the fundamental rights of EU citizens, stimulate investment in artificial intelligence in Europe and spur innovation.
The new law categorizes different types of artificial intelligence according to risk. AI systems presenting only limited risk will be subject to very light transparency obligations, while high-risk AI systems will need to be authorized and will be subject to a set of requirements and obligations to gain access to the EU market.
AI systems used for cognitive behavioral manipulation and social scoring, for example, will be banned from the EU because their risk is deemed unacceptable. The law also prohibits the use of AI for predictive policing based on profiling, as well as systems that use biometric data to sort people according to characteristics such as race, religion or sexual orientation.
Notably, the fines for infringements of the act are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. Small and medium-sized enterprises and start-ups are subject to proportional administrative fines.
Because the leading AI companies are mostly large multinational, cross-industry firms whose global turnover also comes from markets outside the EU and from businesses beyond AI, penalties for violating the EU AI Act can cut into revenue earned in other markets and other industries. This further increases the act's deterrent effect on these large companies, most of which are US-based.
That might prompt the United States to follow suit by passing its own AI act to protect US companies' interests in the EU. Meanwhile, some US policy consultants suggest the EU act still has considerable room for improvement, and that any US AI act should not only focus on risk prevention and control but also pay more attention to supporting AI innovation.
That being said, the EU's AI Act will surely prompt more countries to accelerate their own legislative work on AI. They can draw on the EU's experience in the field and make their own AI acts more flexible and better adapted to their respective national conditions.