Generative AI in law: The good, the bad and the ugly


Earlier this year, a New York law firm made headlines after getting blasted by a judge for using ChatGPT to estimate its legal costs.

Among documents submitted to the court, Cuddy Law Firm included an analysis from OpenAI’s generative artificial intelligence (AI) tool. Lawyers from the firm had asked the chatbot how much they should charge their client for the case they’d just won — and tried to use it to help justify the $113,484.62 bill.

Judge Paul Engelmayer was not pleased. He said that ChatGPT wasn’t a good tool for calculating legal fees because the platform didn’t have the right information to make an informed decision. He called the move “utterly and unusually unpersuasive.”

While legal tech companies are pushing tools into the market, many lawyers are still in the dark about how to use generative AI at all, let alone how to use it well.

However, even though this technology is still evolving, there are ways to use generative AI to make legal work easier.

Generative AI tools on the market

Generative AI refers to technology that generates content based on prompts entered into the system. For example, a tool like ChatGPT can generate questions to ask a client on an intake form or create an image for a blog post about how to find a lawyer.

ChatGPT may be the most talked about AI tool, but it’s not the only one on the market. For law firms, the most common tools include Microsoft’s Copilot, which is integrated into Office 365, Google’s Gemini, and Claude AI from Anthropic, which is competing with OpenAI to create the most human-like AI tool.

Legal tech companies are quickly adopting generative AI tools such as CoCounsel, a legal assistant that can review and search documents and help draft communications.

There’s also Lexis AI+ from LexisNexis, which can summarize case law, help draft client emails, and search for cases.

In Canada, CanLII is rolling out a generative AI tool that summarizes case law, legislation, and administrative tribunal decisions. It’s available in Alberta, Saskatchewan, Manitoba, Prince Edward Island and the Yukon.

What makes these legal AI tools different from ChatGPT is that they’re built on existing legal databases to ensure more accurate information.

Jennifer Wondracek, a law school professor and director of the law library at Capital University Law School in Ohio, recently spoke at the American Bar Association’s ABA TECHSHOW about how lawyers should be using generative AI.

“AI has been around since 1955,” she says. “We have Siri, Alexa and so many other AI products around us. What’s important is figuring out what you need and understanding how AI works.”

Wondracek recommends reading the terms and conditions of any generative AI tool before using it, as it can present privacy and confidentiality issues. These tools are trained on data, and where that data comes from is the main issue for large language models like ChatGPT and Copilot. Currently in the United States, OpenAI, Anthropic, and Google are among the companies facing lawsuits for copyright infringement.

Users also have to opt out to ensure their data isn’t being used to train the software. Even then, there can be loopholes. Law firms have recently discovered that Microsoft’s privacy policy allows the company’s employees to read AI prompts to identify hateful or violent content.

While some of these tools have free access, those versions won’t protect your data.

“You have to buy licenses because it has stronger privacy terms,” Wondracek says. “But even paid personal licenses may not have the data protection clauses you need for legal work, so look for enterprise or business-oriented licenses.”

Sanjay Khanna became the chief information officer at Cox and Palmer last year at the beginning of the ChatGPT craze. He’s excited about the technology but wants firms to think about governance and compliance before deciding what to use.

“The legal industry can benefit greatly from generative AI due to the amount of data that can be used to create new content for legal and non-legal matters,” Khanna says.

“It’s important that firms assess and analyze the data utilized and ensure that client confidentiality and legal regulatory standards are followed.”

Khanna says getting the best results from generative AI requires organization. At Cox and Palmer, that’s meant streamlining information management processes on how documents are stored, searched, and retrieved.

How lawyers can use generative AI

Here’s the thing: generative AI is only as good as what you ask it. That’s why prompt engineering — the practice of crafting inquiries that produce optimal outputs from AI tools — is quickly becoming one of the skills tech companies are clamouring for. It’s more sophisticated than searching for a phrase or keywords, especially since most generative AI tools also allow users to upload documents.

“Think about what you want and how to ask for it,” says Wondracek. “When I’m researching children under the age of 18, I have to think about the different ways we name people that age: minors, children, infants, etc. Think about what jurisdiction you’re looking for too.”

Then there’s the issue of hallucinations — the phenomenon where generative AI tools fabricate information to fill gaps in their training data. While some legal AI companies claim to be hallucination-free, Wondracek says hallucinations can’t be fully eliminated.

“Legal AI companies are sitting on top of legal databases and that helps to try to mitigate them,” she says, adding that ultimately, hallucinations are part of the package.

“It’s a trade-off. You can tweak the underlying commands used to retrieve data but hallucinations will still occur.”

One of her tips is to always check your work. Guidelines from law societies in British Columbia, Alberta, and Saskatchewan all require lawyers to check what they’ve generated from AI tools. Lawyers must also ensure confidential client information isn’t entered into generative AI systems.

Wondracek says verifying information means reading every case you’ve cited to ensure the right law has been referenced and to confirm that the court cases exist.

The common thread when lawyers have been caught using fake citations? They’d all used ChatGPT for legal research, something Wondracek says should be a no-go.

Instead, lawyers should think of generative AI as a great place to start or finish — “the sandwich method” of doing legal work. For instance, start with generative AI tools to create questions for discovery or come to it after completing a draft of oral arguments to help make them more persuasive.

“You can have it write a high-level memo or look at contract clauses. You can feed it long PDFs and ask questions about the document. There are many mundane tasks generative AI can do,” Wondracek says, noting she uses it to create onboarding checklists for new employees.

“But ChatGPT is not intended to do research.”

While mundane may not sound as exciting as having the technology do all the work, she says these tasks are where generative AI can make legal practice easier and faster.

“There’s lots of other things generative AI can do. Lawyers need to explore and figure out what works for them.”


Julie Sobowale is a journalist who covers law and technology.


