
Top Artificial Intelligence (AI) Governance Laws and Frameworks


Artificial Intelligence (AI) is changing the world quickly, and several nations and international organizations have adopted frameworks to direct the development, application, and governance of AI. Numerous initiatives are shaping the ethical use of AI, prioritizing human rights alongside innovation. Here are some of the top AI governance laws and frameworks.

1. EU AI Act

The Artificial Intelligence Act, a landmark piece of legislation designed to promote innovation while guaranteeing AI safety and adherence to fundamental rights, was approved by the European Parliament. It outlaws AI applications that pose a risk to people’s rights, such as certain biometric systems and emotion recognition in settings like schools and workplaces. The use of biometric identification by law enforcement is tightly controlled, and real-time deployment requires strict safeguards.

High-risk AI systems must meet clear obligations to reduce potential harm, maintain transparency, and provide for human oversight. Transparency requirements also apply to general-purpose AI systems and models, and deepfakes must be clearly labeled as such.

2. EU AI Liability Directive

The European Parliament and Council have proposed an AI Liability Directive to address the challenges that AI poses to existing liability rules. Because AI systems are complex and opaque, victims struggle to establish liability, and current national liability frameworks are ill-suited to handling claims for AI-related damage. The directive aims to give victims of AI-related harm the same level of protection as those affected by traditional products, while preventing fragmented national adaptations of liability rules and reducing legal uncertainty for firms. It supports the Union’s digital and environmental objectives and forms part of a larger EU plan to advance trustworthy AI and digital technologies.

3. Brazil AI Bill

This law lays out national guidelines for creating, deploying, and responsibly using AI systems in Brazil. Its goal is to protect fundamental rights and guarantee safe, dependable AI systems that advance science, democracy, and the interests of citizens. Human-centricity, respect for democracy and human rights, environmental preservation, sustainable development, equality, non-discrimination, and innovation are the guiding principles of AI development in Brazil. The law also supports consumer protection, fair competition, and free enterprise. Together, these provisions underscore the importance of responsible AI governance that respects ethics and fundamental rights while advancing technology in line with democratic ideals.

4. Canada AI and Data Act

Part of Canada’s Digital Charter Implementation Act, 2022, the proposed Artificial Intelligence and Data Act (AIDA) seeks to govern AI systems to guarantee their safety, fairness, and accountability. AI is being used increasingly in vital sectors like healthcare and agriculture, but it can also be harmful, especially to disadvantaged groups. AIDA would create standards for ethical AI design, development, and deployment with a focus on fairness and safety. This legislation reflects Canada’s commitment to harnessing AI’s promise while defending individuals’ rights and minimizing its risks.

5. U.S. Executive Order on Trustworthy AI

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence emphasizes both the potential benefits and the hazards of AI. It recognizes the pressing need for responsible AI governance to address social issues and avoid negative outcomes, including fraud, discrimination, and threats to national security. To develop and apply AI safely and responsibly, the order calls for a concerted effort across government, the private sector, academia, and civil society.

The Administration seeks to align executive departments and agencies with eight guiding principles and priorities for AI development and governance. These efforts will involve collaboration with a wide range of stakeholders, including industry, academia, civil society, labor unions, foreign partners, and others. This policy framework demonstrates a commitment to leading in AI governance and ensuring AI’s responsible growth, thereby strengthening American society, the economy, and national security.

6. NYC Bias Audit Law

Local Law 144 of 2021, enforced by the NYC Department of Consumer and Worker Protection (DCWP), restricts how employers and employment agencies in New York City may use Automated Employment Decision Tools (AEDTs). The law forbids using an AEDT unless it has undergone a bias audit and the required notices have been given. AEDTs are computer-based tools that use machine learning, statistical modeling, data analytics, or AI to substantially assist or replace discretionary decision-making in employment decisions. By mandating adherence to bias audit and notice standards, the law seeks to ensure accountability and transparency in the use of these tools.
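At the core of a bias audit under the DCWP’s implementing rules is the impact ratio: each category’s selection rate divided by the selection rate of the most-selected category. The sketch below illustrates that calculation; the function names and the audit figures are hypothetical, not taken from any real audit.

```python
# Illustrative impact-ratio calculation for an AEDT bias audit.
# Selection rate = candidates selected / candidates assessed, per category.
# Impact ratio = category's selection rate / highest selection rate.

def selection_rates(outcomes):
    """outcomes: {category: (selected, assessed)} -> {category: rate}"""
    return {cat: sel / tot for cat, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())  # rate of the most-selected category
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical audit data: (selected, assessed) by demographic category.
data = {"group_a": (200, 1000), "group_b": (120, 800)}
print(impact_ratios(data))  # group_a: 1.0, group_b: 0.15/0.20 = 0.75
```

A published audit would report these ratios per category (and for intersectional categories); a low ratio flags a disparity for the employer to examine.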

7. China Algorithmic Recommendation Law

These regulations set guidelines for the use of algorithmic recommendation technology in internet services in mainland China. They seek to safeguard national interests, regulate providers’ conduct, preserve social values, and defend the rights of individuals and organizations. The state internet information department leads governance, working with other relevant bodies such as market regulators, public security, and telecommunications authorities. Providers must follow the law, uphold ethical norms, and give top priority to equity, fairness, openness, and good faith. Industry associations are expected to issue guidelines, enforce rules, and support providers in meeting regulatory obligations and public expectations for algorithmic recommendation services.

8. China Generative AI Services Law

With the Cyberspace Administration of China (CAC) and other government agencies issuing the Interim Measures for the Administration of Generative Artificial Intelligence Services, China has moved early to regulate generative AI services. These regulations, which took effect on August 15, 2023, govern companies that offer generative AI services to the general public in China, covering models that generate text, images, audio, and video. The Interim Measures leave room for foreign investment while promoting innovation and research, and future AI legislation is expected to extend regulation beyond generative AI. Given the potential penalties or shutdowns for non-compliant services operating in China, compliance is essential.

9. China Deep Synthesis Law

This law sets forth guidelines for deep synthesis technology used in China’s online information services. Built on existing cybersecurity, data security, and personal information protection regulations, it aims to manage deep synthesis services, preserve socialist values, safeguard national interests, and promote public welfare. Oversight is led by the state internet information department, with the public security and telecommunications departments also responsible for supervision. Deep synthesis service providers must abide by the law, respect social norms, and uphold the prescribed political direction and values. Industry associations are encouraged to establish norms and self-discipline mechanisms for deep synthesis service providers to ensure lawful operations and social accountability.

10. Peru Law 31814

With Law 31814, passed to encourage the use of AI for social and economic development, Peru’s Executive Power has established the country as a pioneer in AI legislation in Latin America. The law emphasizes ethical standards and human rights while promoting the responsible, transparent, and sustainable use of AI. It declares the promotion of emerging technologies such as AI to be of national interest, with the aim of improving national security, the economy, public services, health, and education.

11. South Korea AI Act

With the proposed “Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI” (AI Act), South Korea is moving forward with its legislative framework for AI. By consolidating seven different AI-related regulations into one comprehensive strategy, this legislation seeks to oversee and regulate the AI industry as a whole. The AI Act emphasizes strengthening the AI sector while guaranteeing the trustworthiness of AI systems to protect users. Key provisions include defining high-risk AI categories, supporting AI companies, establishing ethical standards, permitting innovation without prior government clearance, and forming an AI Committee and policy roadmap.

12. Indonesia Presidential Regulation on AI

AI regulation is evolving in Indonesia as AI is increasingly integrated across businesses. The nation has taken steps to address ethical issues and rules for AI use, even though specific legislation in this area is still lacking. The National Strategy on Artificial Intelligence 2020–2045 offers a foundation for the creation of AI policies. The Electronic Information and Transactions Law (EIT Law), which defines electronic agents and lays out general guidelines for AI operators, currently governs artificial intelligence. Recent developments that highlight the ethical use of AI include the OJK Code of Ethics Guidelines for AI in the Financial Technology Industry and MOCI Circular Letter No. 9 of 2023 (MOCI CL 9/2023).

13. Mexico Federal AI Regulation

The proposed legislation in Mexico outlines a thorough framework for regulating AI technologies. It includes extraterritorial applicability provisions, requiring compliance by foreign AIS providers that supply services or generate data used in Mexico. Authorization would be supervised by the Federal Telecommunications Institute (IFT), with support from a National Artificial Intelligence Commission. As in the EU, AI systems would be categorized by risk level. Even for services provided free of charge, AIS implementation would require prior authorization from the IFT, and penalties for noncompliance could reach 10% of the offender’s annual income. This law, which seeks to shape AIS development and commercialization in Mexico, parallels global trends in AI policy.

14. Chile Draft AI Bill

The legislative body of Chile has commenced deliberations on a bill designed to govern the moral and legal dimensions of AI in relation to its development, dissemination, commercialization, and application. The goal of the Bill, which has the backing of Chile’s Ministry of Science, Technology, Knowledge, and Innovation and is modeled after Europe’s 2021 Artificial Intelligence Act, is to strike a balance between technological advancement and citizen rights. It suggests defining AI, designating high-risk AI systems, creating a National Commission for AI, demanding permission for AI development and usage, and delineating the consequences of noncompliance. Chile is demonstrating its commitment to responsible technological innovation management with this legislative endeavor, which prioritizes human well-being and societal advantages in the application of AI. 

15. NIST AI RMF

NIST’s AI Risk Management Framework (AI RMF) offers structured guidance for addressing risks associated with AI. Created through joint efforts between the public and private sectors, the framework is accompanied by a generative AI profile that addresses 12 identified risks. To help organizations establish trustworthy AI practices, it provides resources and actionable guidance such as the AI RMF Playbook, Roadmap, Crosswalk, and Perspectives. Launched in March 2023, the Trustworthy and Responsible AI Resource Center promotes adoption of, and alignment with, the AI RMF on a global scale. NIST’s consensus-driven methodology supports thorough risk management for AI technology, boosting confidence and dependability in deployment.

16. Blueprint for an AI Bill of Rights

The Blueprint for an AI Bill of Rights addresses the issues raised by technology and automated systems that have the potential to violate people’s rights. The effort seeks to advance society with the help of technology while defending democratic principles and civil rights, in line with President Biden’s commitment to eliminating injustices and strengthening civil rights. To protect American citizens in the age of artificial intelligence, the White House Office of Science and Technology Policy has established five guiding principles for the appropriate design, use, and deployment of automated systems. The blueprint serves as a framework to safeguard people’s rights and direct technological advancement and policy in a way that upholds civil liberties and democratic principles.

17. OECD AI Principles

The OECD AI Principles, adopted in May 2019, promote the innovative and trustworthy use of AI while upholding democratic values and human rights. The principles are as follows.

  1. Inclusive Growth, Sustainable Development, and Well-Being: AI should advance inclusive economic prosperity, human well-being, and sustainable development. 
  2. Human-Centered Values and Fairness: AI systems should respect human rights, diversity, and fairness, without discrimination. 
  3. Transparency and Explainability: Users should be able to understand how AI systems work and how they reach their outputs. 
  4. Robustness, Security, and Safety: AI systems must be reliable, safe, and secure throughout their entire life cycle. 
  5. Accountability: AI actors should be accountable for the proper functioning of AI systems and the outcomes they produce.

18. OECD AI Risk Classification Framework

The OECD Framework for the Classification of AI Systems offers an organized method for assessing and categorizing AI systems according to their attributes and contexts. This easy-to-use tool helps lawmakers, regulators, policymakers, and other stakeholders understand and weigh the advantages and disadvantages of various AI systems. The framework considers several dimensions, each with its own subset of characteristics: People & Planet, Economic Context, Data & Input, AI Model, and Task & Output. By tailoring their policy approaches to different types of AI systems, policymakers can ensure an innovative and trustworthy approach that is in line with the OECD AI Principles.
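As an illustration only, an organization’s classification record for a single system could be structured along the framework’s five dimensions. The type and all example values below are invented for this sketch; the OECD framework itself prescribes no data format.

```python
# Hypothetical record structured along the OECD framework's five
# dimensions (People & Planet, Economic Context, Data & Input,
# AI Model, Task & Output). Field contents are invented examples.
from dataclasses import dataclass

@dataclass
class AISystemClassification:
    people_and_planet: str   # affected users, rights/safety impact
    economic_context: str    # sector and business function
    data_and_input: str      # data provenance, personal data involved
    ai_model: str            # model type, degree of explainability
    task_and_output: str     # task performed, level of autonomy

chatbot = AISystemClassification(
    people_and_planet="consumers; low physical risk",
    economic_context="retail customer service",
    data_and_input="user-provided text; may include personal data",
    ai_model="large language model; limited explainability",
    task_and_output="dialogue generation; human review optional",
)
print(chatbot.economic_context)
```

Recording each dimension explicitly makes it easier to match a system to a proportionate policy response, which is the framework’s stated purpose.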

19. Council of Europe Framework Convention on AI

This convention is designed to guarantee that activities involving AI systems respect democratic principles, human rights, and the rule of law. Each party to the convention must take appropriate measures to carry out these obligations, graduated according to the severity and likelihood of adverse effects on democracy, human rights, and the rule of law. The convention covers computer-based systems that produce judgments or predictions that influence their environments, and it applies to activities undertaken by public authorities, or by private actors acting on their behalf, throughout the AI system’s lifecycle. For private actors’ activities not carried out on behalf of public authorities, parties must address risks and impacts in a manner consistent with the convention’s aims.

20. Singapore AI Verify Framework

AI Verify is a software toolkit and testing framework for AI governance, intended to evaluate AI systems against internationally accepted AI governance frameworks such as those of Singapore, the OECD, and the European Union. It conducts technical tests on supervised learning models for tabular and image datasets in enterprise contexts. AI Verify does not define AI ethical standards, does not guarantee that tested AI systems are free of bias or risk, and does not test generative AI or large language models (LLMs). Although AI Verify is still a Minimum Viable Product (MVP), its developers recognize important gaps in AI governance testing and plan to open-source the toolkit to involve research groups, industry players, and developers in advancing AI governance testing and evaluation.

21. UNESCO AI Ethics Recommendation

Within the context of UNESCO’s mandate, this recommendation tackles the ethical issues surrounding AI, offering a framework of normative reflection built on interdependent values, principles, and actions. It emphasizes harm prevention, human dignity, and well-being as fundamental ethical commitments grounded in science and technology. Rather than trying to define AI, it addresses the ethically significant aspects of AI systems, such as their capacities for information processing, learning, reasoning, and decision-making. Ethical issues arise across the entire AI lifecycle, from research and creation to implementation and use. The recommendation underscores the significance of responsible practices, critical thinking, and ethical education in digital communities while highlighting the impact of AI on education, research, culture, and communication.

22. G7 Hiroshima Process AI Guiding Principles

The Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems aim to establish standards for companies creating and using advanced AI technologies. The principles are intended to guarantee the dependability, security, and safety of sophisticated AI systems, such as generative and foundation models. The document emphasizes collaboration between academia, civil society, the private sector, and public sector organizations, and it builds on the existing OECD AI Principles, adjusting them to the changing state of AI technologies. Key components include respecting democratic values, human rights, and diversity, and ensuring that AI systems do not seriously jeopardize safety and security.

23. ISO/IEC 42001

The international standard ISO/IEC 42001 specifies the requirements organizations must meet to implement and maintain an artificial intelligence management system (AIMS). It provides essential direction for navigating the ethically complex and rapidly developing field of AI. As the world’s first AI management system standard, ISO/IEC 42001 helps organizations manage the opportunities and risks related to the development and use of AI, and it encourages ethical AI practices that balance innovation with regulation. The standard promotes confidence and accountability in AI systems globally and is crucial for companies that use or offer AI-based goods or services.

24. ISO/IEC 23894

An information technology standard called ISO/IEC 23894:2023 provides recommendations on risk management for artificial intelligence (AI) to organizations engaged in the creation, implementation, or use of AI products. This document offers procedures for efficient implementation, assisting in the integration of risk management into AI-related activities and operations. The guidelines promote customized approaches to AI risk management and can be adjusted to fit any organization and scenario. Organizations can improve their capacity to recognize, evaluate, and reduce risks associated with AI by adhering to ISO/IEC 23894. 

25. IEEE P2863

The Recommended Practice for Organizational Governance of Artificial Intelligence (AI) outlines governance criteria for AI development and use within organizations, including safety, transparency, accountability, responsibility, and bias minimization. The standard provides process steps for training, compliance, performance auditing, and effective implementation of AI governance. The working group, sponsored by the IEEE Computer Society’s Artificial Intelligence Standards Committee, is focused on AI governance. This active PAR (Project Authorization Request) underscores the significance of structured governance frameworks for ensuring ethical, responsible, and effective AI deployment across diverse organizational contexts.

26. IEEE P7003

The Algorithmic Bias Considerations standard provides methodologies for addressing bias in algorithm design and development. It includes recommendations for managing user expectations to reduce bias arising from misinterpretation of system outputs, guidelines for setting and communicating the boundaries of an algorithm’s application to prevent unintended consequences, and criteria for selecting validation datasets to control for bias. Supported by the Software & Systems Engineering Standards Committee (C/S2ESC), the standard aims to advance fairness and transparency in algorithm development; since its approval it has been active, serving as a crucial tool for addressing algorithmic biases and ensuring the ethical application of AI.


This article is inspired by this LinkedIn post.

Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.


