The first international treaty on artificial intelligence adopted by the Council of Europe


Today in Strasbourg, the Council of Europe officially adopted the “Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law”, an international, legally binding treaty on the responsible use of AI.

The adopted document is binding for all 46 member states of the Council, which says the convention takes a risk-based approach to the design, development, use, and decommissioning of AI systems, requiring careful consideration of any potential negative consequences of their use.

Council of Europe Secretary General Marija Pejčinović Burić was quoted in the press release, saying:



“The Framework Convention on Artificial Intelligence is a first-of-its-kind, global treaty that will ensure that Artificial Intelligence upholds people’s rights.

“It is a response to the need for an international legal standard supported by states in different continents which share the same values to harness the benefits of Artificial intelligence, while mitigating the risks.”

All the member states, the European Union, and 11 non-member states contributed to drafting the treaty over the past two years. The final version of the document was approved by the Ministers for Foreign Affairs of the member nations.

The treaty covers the use of AI systems in both the public and the private sector, and there are two ways to comply: “Parties may opt to be directly obliged by the relevant convention provisions or, as an alternative, take other measures to comply with the treaty’s provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law.”

The Council explains that this approach is necessary because of the differences in legal systems around the world:

“The convention establishes transparency and oversight requirements tailored to specific contexts and risks, including identifying content generated by AI systems. Parties will have to adopt measures to identify, assess, prevent, and mitigate possible risks and assess the need for a moratorium, a ban, or other appropriate measures concerning uses of AI systems where their risks may be incompatible with human rights standards.”

There are already known cases of tech companies deciding not to release their latest and most powerful technologies because of the risk of misuse. Microsoft is “hiding” VASA-1, its model for generating lifelike talking faces, while OpenAI is keeping its powerful voice-cloning model under wraps as well.

Each party to the treaty has to establish an independent oversight mechanism to monitor compliance with the convention. The convention also carves out an exception: it won’t apply to national defence matters or to research and development activities, provided those are carried out in line with human rights. Activities related to the protection of national security must still respect international law and democratic institutions and processes.

The treaty is also open to non-European signatories.
