
Artificial Intelligence: The European Union sets unprecedented limits


The AI Act, the European Union's new regulation on AI, is built around a four-tiered risk pyramid ranging from a green light to outright prohibition. Unprecedented on a global scale, the text will be definitively adopted on May 21 in Brussels, but it will not come into full effect until 2026.

This marks the end of a three-year legislative process that was at times disrupted by the emergence of new technologies, such as ChatGPT in late 2022. Those developments forced lawmakers to revise their drafts, further intensifying the political debate.

The four risk levels are the result of intense “bargaining” between European institutions: Which system belongs in which category? With each decision, negotiators knew they were walking a tightrope: protect security and individual freedoms without stifling innovation in Europe and thereby handing an advantage to the United States or China. Tech lobbies, heavily involved in the debate, made sure that point was not forgotten.

The final text bears their mark. Even at the highest risk level, prohibition is rarely total. The many exemptions introduced (notably to allow security-related uses of AI) produce a paradoxical result: by pairing bans with exemptions, the text authorizes practices that were previously prohibited. Such is the case with real-time facial recognition and emotion recognition in specific contexts.

▶ Low or no risk

No restriction (unless the system undergoes substantial modifications, in which case risks must be reassessed)

  • Video games: Nvidia, Ubisoft, and other industry giants have understood this well: so-called generative AI (capable of producing new content after being trained on vast amounts of data) offers tantalizing possibilities for video games. Tomorrow’s gamers will likely be able, among other things, to hold spoken conversations with their virtual characters.
  • Spam filters: This software automatically filters out unwanted or malicious emails, email being a prime vector for infections and other cyberattacks. A minimal sketch of one common filtering approach appears below.
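
To make the mechanism concrete, here is a minimal sketch of a classic spam-filtering approach: a naive Bayes classifier over word counts. It assumes the scikit-learn library, and the tiny hand-labeled corpus is purely illustrative; real filters train on millions of messages and combine many more signals.

```python
# Minimal spam-filter sketch: naive Bayes over bag-of-words counts.
# Toy data for illustration only; production filters use far richer features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free prize now, click here",            # spam
    "Cheap loans approved instantly",              # spam
    "Meeting moved to 3pm, see agenda attached",   # legitimate
    "Quarterly report draft for your review",      # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)  # word-count matrix

model = MultinomialNB()
model.fit(X, labels)

incoming = ["Click here for your free prize"]
prediction = model.predict(vectorizer.transform(incoming))
print("spam" if prediction[0] == 1 else "legitimate")
```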

▶ Moderate risk

Transparency obligation (indicating that the content was generated by artificial intelligence)

  • Image manipulation: Deepfakes, increasingly realistic and easy-to-create fake photos or videos, regularly stir controversy. Some make political figures say things they never said; others depict women in lascivious or even pornographic poses without their consent.
  • Chatbots: Conversational agents themselves (including the famous ChatGPT) are considered only “moderate risk.” The foundation models on which they are technically built, however, are subject to stricter obligations, more stringent than the industry and some member states wanted. For example, the data used to train these models must be disclosed so that potential rights holders can check whether their content was used. One way to attach this tier’s required transparency label to generated content is sketched below.
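
The tier’s core obligation boils down to making the AI origin of a piece of content machine-readable. Here is a minimal sketch of what such a disclosure could look like; the JSON field names and the label_ai_content helper are hypothetical, since the AI Act does not prescribe a specific format.

```python
# Hypothetical sketch of a transparency label for AI-generated content.
# Field names are illustrative; the AI Act does not mandate this format.
import json
from datetime import datetime, timezone

def label_ai_content(content: str, generator: str) -> str:
    """Wrap generated text in a provenance record that downstream tools can flag."""
    record = {
        "content": content,
        "ai_generated": True,  # the disclosure itself
        "generator": generator,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(label_ai_content("A sunset over Brussels...", "example-model-v1"))
```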

▶ High risk

Uses subject to compliance procedures (verifying the quality of training data, minimizing discrimination risks, providing human oversight, etc.)

  • CV screening: Software used to filter job applications. Its main issue lies in the biases it can carry: Amazon, for example, abandoned such a tool in 2018 because it discriminated against women applying for technical jobs.
  • Predictive justice: This involves predicting the outcome of a legal dispute after algorithmically analyzing large legal corpora. “Legaltech” startups offering this service have proliferated in France, where there are now several hundred.
  • Autonomous vehicles: At the end of the 2010s, it was imagined that Uber drivers would soon be replaced by robot taxis. Since those remain far from perfected, the focus has shifted to less ambitious features such as automatic emergency braking or low-speed hands-free driving.
  • Fake document detection: These detectors are among the new tools available to law enforcement, alongside thermal-imaging cameras for spotting illegal border crossings. More broadly, all AI systems used for migration management are classified as “high risk.”
  • Credit scoring: Already used in banking, these “scoring” programs assess a person’s creditworthiness when they apply for a loan. They can, however, perpetuate old patterns of discrimination (based on origin, age, disability, etc.). One illustrative bias check of the kind compliance reviews call for is sketched after this list.
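
The compliance procedures for this tier include “minimizing discrimination risks.” One common way to surface such risk, shown here purely as an illustration (the AI Act does not mandate any specific metric), is to compare selection rates across demographic groups, as in the US “four-fifths” disparate-impact heuristic.

```python
# Illustrative bias check: compare selection rates across groups.
# The 0.8 threshold follows the US "four-fifths" heuristic; the AI Act
# does not prescribe this metric, so treat it as one possible probe.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

# Toy screening outcomes for two applicant groups.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, "impact ratio:", round(ratio, 2))  # flag for review if < 0.8
```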

▶ Unacceptable risk

Use prohibited

  • Social scoring: These systems rank individuals according to their behavior, as in China, where, for example, citizens who fail to pay their debts lose points while those who buy healthy food gain them. A good ranking eases access to healthcare, bank loans, or transportation.
  • Subliminal techniques: Messages designed to register below the threshold of consciousness. An image inserted into an advertisement for one twenty-fifth of a second is invisible to the viewer but could, in theory, be used to manipulate them, although no scientific consensus has emerged on this point.
  • Real-time facial recognition: This has been one of the most debated topics. The final text provides exemptions for certain law-enforcement missions: preventing a terrorist threat, identifying suspects in serious crimes, or searching for missing persons.
  • Emotion recognition: Software that claims to detect emotions from facial expressions is banned in educational and professional settings but allowed for border control, where the aim is to determine whether a migrant is being “honest” about their migration request.
  • Predictive policing: This involves assessing, from a person’s characteristics, the risk that they will commit criminal offenses. The ban is not total, however: such systems may be authorized in criminal investigations to supplement human assessments based on verifiable facts.


