Generative AI

D&O Risks to Consider When Exploring the New Frontier of Gen AI


Is the world being overtaken by robots? While the answer to that question is clearly “no” – or at least, “not yet” – nearly every industry and business media outlet is focused on artificial intelligence.

This includes the insurance industry. AI is ingrained in everyday life, powering voice assistants, stock portfolios, news feeds, and advertisements.

These more traditional forms of AI have seamlessly melded into the background of society. The future sits with newer, more advanced forms of generative AI, such as ChatGPT, which can create original content that may transform the business world. It can also create new risks in the process.

A February 2024 report from management consultancy McKinsey titled, “Beyond the hype: Capturing the potential of AI and gen AI in tech, media and telecom,” suggests that AI will add more than $2 trillion to the economy annually.

Accordingly, businesses are in a race to integrate AI into their processes. The risks that arise from the race to automation and machine intelligence will no doubt impact insureds in nearly every industry and sector.

How and Where AI Is Being Used

Most companies appear to be using AI in some capacity, and their numbers grow with each passing day. Businesses are using generative AI to gather, analyze, and synthesize large sets of data, which can cut decision-making time and actively assist in corporate decision making and strategy.

Some industries naturally appear to be adopting AI at a higher rate than others, and these early adopters may experience higher levels of risk in the near term. For example, consumer goods and retail, tech, financial services, professional services, and healthcare each experienced nearly double-digit growth in the use of AI over the last several years, according to an August 2023 report by software company Magnifi titled, “Which Industries Are Adopting AI the Most and Least?”

Some of the uses for AI include analyzing data to set prices and manage inventory, monitoring compliance and cybersecurity risk, analyzing submissions including patient data and claims, credit scoring, and even investment decisions. The common theme throughout all industries appears to be analyzing the specific data sets held by companies to produce faster or more accurate outputs to inform strategy and decision making.

This reliance on data analysis leads to potential risks posed by the use of AI. What happens if the data inputs are inaccurate or low quality? Whom does the data belong to and how is it being used? Is data being kept safe? Each of these questions leads to unique risks that companies and individuals will look to their insurers to cover.

The Insurance Risks Posed by AI

One of the biggest current risks to insurers regarding AI is the unknown nature of potential liability. AI’s continued development and improvement, and its ever-increasing uses, make the extent and sources of liability unclear. With growing interest in the topic, and some risks already coming to fruition, the future threats are becoming clearer for insurers, though there is naturally some educated guesswork involved.

For now, significant risk sits with AI developers, the companies building AI systems and products. As such, we can expect more direct actions against AI developers and their directors and officers arising from how the AI products they have created are being used. These lawsuits are likely to take the form of intellectual property and privacy-related suits.

For example, generative AI developers have faced copyright and trademark lawsuits arising from the data used to train their models, as outlined in law firm K&L Gates’ September 2023 report titled, “Recent Trends in Generative Artificial Intelligence Litigation in the United States.”

The report points to one of the most well-known cases alleging copyright infringement through the use of generative AI: Andersen v. Stability AI Ltd. In this case, plaintiffs Sarah Andersen, Kelly McKernan, and Karla Ortiz, on behalf of a putative class of artists, alleged that Stability AI Ltd., Stability AI Inc., and others scraped billions of copyrighted images from online sources without permission to train their image-generating models to produce seemingly new images without attribution to the original artists.

A similar lawsuit followed in February 2023. In Getty Images (US), Inc. v. Stability AI, Inc., media company Getty Images Inc. asserted claims of copyright and trademark infringement, alleging that Stability AI scraped Getty’s website for images and data used to train its image-generating model, Stable Diffusion, with the aim of establishing a competing product or service, according to the K&L Gates report.

AI models can be trained on a variety of sources, including text, images, and music, many of which are protected by copyright law. A high-profile example is The New York Times Co.’s lawsuit against Microsoft Corp. and OpenAI Inc. over the use of the newspaper’s content to help develop artificial intelligence services. Privacy lawsuits are being brought on the same premise: that AI developers are using private information to train their models.

‘AI’s continued development and improvement and its ever-increasing uses make the extent and sources of liability unclear.’

Other lawsuits include data privacy claims arising from unauthorized facial scanning and libel claims arising from AI hallucinations, a phenomenon in which generative AI creates false information and presents it as true, in some cases with convincing explanations of where the false information supposedly comes from.

These direct risks are expected to expand, and more traditional economic pressures may arise as well. As more AI developers join the frenzy, those participants may come under great financial pressure to chase market leaders, and late entrants and their backers may face multiple risks in a more crowded space.

The new frontier of risk appears to be claims against companies using AI in their own businesses. The main foreseeable danger relates to disclosures, and disclosure risks can take two contrary forms: either a lack of disclosure or overselling AI usage, a practice now coined AI-washing.

Like greenwashing, in which a company gives a false or misleading impression that its products are environmentally sound, AI-washing involves companies touting their use of AI to attract certain investors or attention when, in reality, the company uses AI only minimally or not at all.

For example, imagine a financial services company advertising its extensive use of AI in its investment portfolio. This sounds like the company is using AI to inform investment decisions. Now suppose the company is actually using AI in investor communications or administrative tasks. A resulting claim could plausibly take the form of a securities fraud case.

In fact, securities class actions and regulatory investigations often stem from lack of or allegedly inaccurate disclosure of information to shareholders. AI-based securities class action lawsuits against companies and their D&Os for failing to fully disclose how they are using AI are likely on the horizon.

Another avenue of potential risk is based on breaches of fiduciary duty and lack of oversight by D&Os. As regulatory and legislative interest in AI increases, so does the burden on D&Os to understand and comply with new regulations. Regulatory investigations, and eventually lawsuits, could arise from the failure to comply with certain, and perhaps as-yet-unknown, regulatory requirements. That risk increases where companies are racing to implement AI programs for a competitive advantage, or perhaps to make up for lost ground.

If and when companies and their D&Os do rely on AI for processes or decision making, there is a risk that AI will get it wrong or produce inequitable, discriminatory, or even absurd results, all of which depends on the underlying programming and human input.

The issue is compounded by the reality that AI will repeatedly make the same mistakes over time, leading to larger claims if problems are not identified quickly. Failing to adequately disclose or promptly address those issues can also lead to management liability challenges.

Other categories of risk include employment-related claims over hiring decisions and bias inadvertently built into AI models through historical data. Professionals such as lawyers, accountants, financial institutions, and healthcare providers may similarly face professional liability claims for relying on AI to cite cases, make investment decisions, or set treatment plans, whether by failing to check the veracity of AI-produced results against human knowledge or by being fooled by AI hallucinations.

Examples of AI Risks

Many of the risks posed to insurers and their insureds are unknown and based on educated guesses.

Some of the examples mentioned, such as The New York Times lawsuit against OpenAI and Microsoft, are examples of common lawsuits that D&O and management liability insurers may already see. However, these examples happen to involve AI companies.

Two AI-based securities class actions have also recently been filed.

The first was filed in February 2024 against Innodata, a data engineering company that assists clients with the use, management, and distribution of their digital data, along with its directors and officers. The lawsuit alleges that the defendants made false and misleading statements about the extent of the company’s use of, and investments in, AI, and that rather than being powered by innovative and proprietary AI, the company was actually relying on low-wage offshore workers.

The second class action was filed in March 2024 against Evolv Technologies Holdings Inc., a weapons-detection security screening company. Evolv allegedly held itself out as a leader in AI weapons detection and promoted its flagship product as being able to detect firearms, explosives, and knives. The lawsuit alleges that Evolv overstated its use of AI and the effectiveness of the security results the AI obtained; specifically, Evolv allegedly hid test results showing failures to detect weapons in screening. The lawsuit also alleges that both the FTC and SEC had begun investigations into the company. These types of disclosure-based lawsuits involving AI companies may arise with increased frequency.

Finally, increased regulatory scrutiny is already developing into investigations and actions. This momentum will most likely lead to enforcement litigation over representations and disclosures concerning AI.

In March 2024, the SEC announced settled charges against two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., asserting that both made false and misleading statements about their use of AI, each marketing greater use of AI than was actually in place.

AI is becoming more of a risk to D&O insurers, even though its current, disclosure-related risks will likely resemble many of the other risks those insurers face across industries. AI is rapidly changing the business landscape, and its growth and reach appear potentially staggering.

Corporate disclosure and risk managers are catching up to these changes. A period of change and recalibration is likely, and insurers will need to account for these rapidly evolving dynamics.

Ferguson is a partner in the New York office of Kennedys, where he focuses on domestic and international insurance matters, primarily directors and officers (D&O) liability, professional liability, and financial institutions liability.

Gammell is an associate at Kennedys and focuses on domestic and international insurance coverage disputes, including those arising under directors’ and officers’ (D&O) liability, professional liability, financial institutions liability, and employment practices liability (EPL).



