The EU’s approach to artificial intelligence centres on excellence and trust
The EU’s Artificial Intelligence (AI) Act is the result of a reflection that began more than ten years ago on how to boost AI research and industrial capacity while ensuring safety and fundamental rights. Weeks before its official publication, which will mark the beginning of its applicability, the EU Delegation in London is hosting Roberto Viola, Director-General of DG CONNECT, for an in-conversation event moderated by Baroness Martha Lane Fox.
The EU’s AI policies aim to enhance its competitiveness in strategic sectors and to broaden citizens’ access to information. One cornerstone of this two-pillar approach, boosting innovation while safeguarding human rights, was the creation six years ago, on 9 March 2018, of the expert group on artificial intelligence, set up to gather expert input and rally a broad alliance of diverse stakeholders.

To boost research and industrial capacity, the EU is also maximising resources and coordinating investments. Through the Horizon Europe and Digital Europe programmes, the European Commission plans to invest €1 billion per year in AI. It will also mobilise additional investments from the private sector and the Member States, with the aim of reaching an annual investment volume of €20 billion over the course of the digital decade. In addition, the Recovery and Resilience Facility makes €134 billion available for digital.

Beyond these investments, building trust requires a safe and innovation-friendly AI environment for developers, for companies that embed AI in their products, and for end users. The Artificial Intelligence Act is at the core of this endeavour. It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation in Europe. The regulation establishes obligations for AI systems that are proportionate to their potential risks and level of impact. It flags certain areas as entailing an “unacceptable risk” and bans AI applications that pose a substantial threat to citizens’ rights, such as social scoring or emotion recognition in schools. The Act then imposes obligations on high-risk applications, for example in healthcare and banking, and introduces transparency obligations for medium-risk applications, such as general-purpose AI systems. These provisions are complemented by regulatory sandboxes and real-world testing, which will have to be established at national level and made accessible to SMEs and start-ups so that they can develop and train innovative AI before placing it on the market.