Gallagher Update: Regulation for Artificial Intelligence
Gallagher’s Cyber practice remains laser-focused on emerging technologies and the potential for increased risk as organizations begin to use them. Throughout 2024, we’re concentrating on evolving compliance requirements for the use of artificial intelligence (AI). Recent AI-specific regulatory proposals in the state, federal and international arenas bear watching. This summary follows our Q1 summary, The Latest Regulation for Artificial Intelligence, with important updates.
State Regulation
As of this writing, 17 states have proposed legislation focusing on AI regulation: California, Colorado, Connecticut, Delaware, Illinois, Indiana, Iowa, Louisiana, Maryland, Montana, New York, Oregon, Tennessee, Texas, Vermont, Virginia and Washington.
- Four states focus on interdisciplinary collaboration: Illinois, New York, Texas and Vermont.
- Four states focus on protection from unsafe or ineffective systems: California, Connecticut, Louisiana and Vermont.
- Eleven states focus on protection from abusive data practices: California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas and Virginia.
- Three states — California, Illinois and Maryland — and New York City focus on transparency.
- Three states focus on protection from discrimination: California, Colorado and Illinois.
- Twelve states focus on accountability: California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Virginia and Washington.
Federal and Industry Sector Regulation
On March 27, 2024, the US Department of the Treasury released a report, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector. It contains recommendations to help financial institutions use AI technologies safely and effectively while safeguarding against threats to operational resiliency and financial stability. The report outlines key items for addressing AI-related operational risk, cybersecurity and fraud challenges, including:
- Addressing capability gaps
- Narrowing the fraud data divide
- Regulatory coordination
- Expanding the National Institute of Standards and Technology (NIST) AI risk management framework
- Best practices for data supply chain mapping and “nutrition labels”
- Explainability for black-box AI solutions
- Gaps in human capital
- A need for a common AI lexicon
- Untangling digital identity solutions
- International coordination
On March 28, 2024, the US Office of Management and Budget released a memorandum requiring government agencies to designate chief AI officers (CAIOs). They’ll be responsible for:
- Promoting AI innovation
- Coordinating agency use of AI
- Managing risks from the use of AI
- Expanding the reporting of AI use case inventories
Global Regulation
On March 13, 2024, the European Union’s Artificial Intelligence (AI) Act was passed. It aims to establish a comprehensive legal framework for the use of AI. Its primary objectives are to foster trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety and ethical principles while addressing risks associated with powerful and impactful AI models.
The key points from the AI Act include the following.
Risk Classification
The AI Act classifies AI systems based on risk:
- Unacceptable risk: Certain AI systems (e.g., social scoring systems and manipulative AI) are prohibited.
- High-risk AI systems: These systems are regulated and subject to extensive obligations. Providers (i.e., developers) of high-risk AI systems must comply with requirements related to transparency, safety and accountability.
- Limited-risk AI systems: These systems — including chatbots and deepfakes — are subject to lighter transparency obligations, as long as users are aware the content is AI generated.
- Minimal-risk AI systems: Systems such as AI-enabled video games and spam filters remain unregulated.
Most obligations fall on providers of high-risk AI systems that intend to use the systems within the EU or to use their output within the EU.
General-Purpose AI
- All general-purpose AI (GPAI) model providers are required to comply with the terms of the Directive on Copyright in the Digital Single Market (also called the Copyright Directive). They’re also required to provide users with instructions for use and written technical documentation.
- Providers of GPAI models that present a systemic risk must conduct model evaluations and adversarial testing, document and report serious incidents, and implement cybersecurity controls.
Prohibited AI Systems
The AI Act prohibits certain types of AI systems:
- Those deploying subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm
- Those exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior, causing significant harm
- Biometric categorization systems inferring sensitive attributes (e.g., race, political opinions, sexual orientation), except for specific lawful purposes
Deployers of AI Systems
Deployers of high-risk AI systems have obligations, though fewer than providers. These obligations apply to deployers located in the EU, as well as to third-country deployers where the AI system’s output is used in the EU.
Risk Management Strategies
Any organization that these new AI compliance requirements may impact should take steps to communicate the requirements to key stakeholders across the enterprise.
Organizations should also be aware of rapidly evolving Cyber insurance products that may affect the scope of coverage for AI-related losses in 2024. Heightened regulatory risk has spurred some cyber insurers to use various methods to limit their exposure to regulatory losses arising from the use of technology. Sub-limits and coinsurance are often imposed for certain cyber losses. In addition, some carriers have modified Cyber insurance policy language to restrict or even exclude coverage for certain incidents that give rise to costs incurred for regulatory investigations, lawsuits, settlements and fines.
Many Cyber insurance policies provide some form of cyber risk services, including regulatory compliance guidance. These services can prove useful in navigating the complex and evolving AI rules and regulations.
In summary, today’s regulation around AI cuts across multiple industry sectors and jurisdictions — including financial services, healthcare, technology, education, real estate and municipalities — and will undoubtedly spread to others in short order. Any organization embracing generative AI tools should consider embedding a formal risk management plan for AI usage into its overall enterprise risk management program. A cross-divisional effort among several key stakeholders will be required. Risk managers should look to coordinate efforts among legal, compliance, human resources, operations, IT, marketing and others, while closely monitoring emerging risks as AI systems become more widely used.