
JAMS issues rules governing disputes involving artificial intelligence systems


Judges are beginning to address the increasing use of AI tools in court filings, including responding to instances of abuse by lawyers relying on generative AI and requiring disclosures regarding the scope of AI use in drafting legal submissions.  Now JAMS, the largest private provider of alternative dispute resolution services worldwide, has issued rules, effective immediately, designed to address the use and impact of AI.

The Upshot

  • Alternative dispute resolution (ADR) providers are joining courts in grappling with the impact of AI on the practice of law.
  • Just as they do in court, litigants need to pay close attention to a particular forum’s rules as they pertain to AI.
  • When selecting an arbitration forum in an agreement, there may be reasons to choose one with AI rules and to specify that those procedures be followed.

The Bottom Line

AI is reshaping the legal landscape and compelling the industry to adapt.  Staying up to date with these changes has become as fundamental to litigation as mastering any other procedural rules.  The Artificial Intelligence Team at Ballard Spahr monitors developments in AI and is advising clients on required disclosures, risk mitigation, the use of AI tools, and other evolving issues.

Judges are beginning to address the increasing use of artificial intelligence (AI) tools in court filings by issuing standing orders, as detailed here, including orders that respond to instances of abuse by lawyers relying on generative AI and that require disclosures regarding the scope of AI use in drafting legal submissions.  Staying up to date with these changes will soon be as fundamental to litigation as mastering any other procedural rules.

In line with court trends, Judicial Arbitration and Mediation Services (JAMS), an alternative dispute resolution (ADR) services company, recently released new rules for cases involving AI.  JAMS emphasized that the purpose of the guidelines is to “refine and clarify procedures for cases involving AI systems,” and to “equip legal professionals and parties engaged in dispute resolution with clear guidelines and procedures that address the unique challenges presented by AI, such as questions of liability, algorithmic transparency, and ethical considerations.”

Although courts have not yet settled on a definition of AI, JAMS deliberately defined it as “a machine-based system capable of completing tasks that would otherwise require cognition,” which clarifies the scope of its rules.  Additionally, the rules include an electronically stored information (ESI) protocol approved for AI cases, along with procedures for overseeing the examination of AI systems, materials, and experts in instances where the existing ADR process lacks adequate safeguards for handling the intricate and proprietary nature of such data.

Specifically, the procedures require each party, before any preliminary conference, to exchange its relevant, non-privileged documents and other ESI.  JAMS suggests that, prior to that exchange, the parties enter into the AI Disputes Protective Order to protect each party’s confidential information.

The form protective order, which JAMS’s regular rules do not provide, limits the disclosure of designated documents and information to specific parties: counsel, named parties, experts, consultants, investigators, the arbitrator, court reporters and staff, witnesses, the mediator, the author or recipient of the document, other persons after notice to the other side, and “outside photocopying, microfilming or database service providers; trial support firms; graphic production services; litigation support services; and translators engaged by the parties during this Action to whom disclosure is reasonably necessary for this Action.”  The list of parties privy to such confidential information does not include any generative AI services, a choice consistent with the broad concern that confidential client information is currently unprotected in this new AI world.

The rules further provide that, in cases where the AI systems themselves are in dispute and require production or inspection, the disclosing party must provide access to the systems and corresponding materials to at least one expert in a secured environment established by that party.  The expert is prohibited from removing any materials or information from this designated environment.

Additionally, experts providing opinions on AI systems during the ADR process must be mutually agreed upon by the parties or, in cases of disagreement, designated by the arbitrator.  Moreover, the rules confine expert testimony on technical issues related to AI systems to a written report addressing questions posed by the arbitrator, supplemented by testimony during the hearing.  These changes recognize the need for both security and technical expertise in this area so that the ADR process remains digestible to the arbitrator or mediator, who likely has limited or no prior experience with it.

While JAMS claims to be the first ADR services company to issue such guidance, other similar organizations have advertised that their existing protocols are already suited to the current AI landscape and court rules.

Indeed, how to handle AI in the ADR context has been top of mind for many in the field.  Last year, the Silicon Valley Arbitration & Mediation Center (SVAMC), a nonprofit organization focused on education at the intersection of technology and ADR, released its proposed “Guidelines on the Use of Artificial Intelligence in Arbitration.”  SVAMC recommends that ADR participants use the guidelines as a “model” for navigating the procedural aspects of ADR related to AI, which may involve incorporating them into some form of protective order.

In part, the guidelines (1) require that the parties familiarize themselves with a given AI tool’s uses, risks, and biases; (2) make clear that the parties of record remain subject to “applicable ethical rules or professional standards” and must verify the accuracy of any AI-generated work product, as they will be held responsible for its inaccuracies; and (3) provide that disclosure regarding the use of AI should be determined on a case-by-case basis.

The SVAMC guidelines also address confidentiality, requiring in certain instances that the parties redact privileged information before inputting documents into an AI tool.  SVAMC even goes so far as to make clear that arbitrators cannot substitute AI for their own decision-making power.  The guidelines are a useful tool for identifying the significant factors that parties engaging in ADR should contemplate, and that the ADR community at large is contemplating.

As courts provide additional legal guidance and more AI-use issues arise, we expect more ADR services companies to move in the same direction as JAMS, and potentially to adopt versions of SVAMC’s guidance, as both the procedures and the technology continue to evolve.


