Cybersecurity, generative AI and all its curiosities
The 2024 International Legal Technology Association (ILTA) EVOLVE conference combined generative artificial intelligence and cybersecurity in Charlotte, North Carolina, in early May.
The panels generally ping-ponged between those two quite broad topics and occasionally intersected them, with some big takeaways.
Below, Legaltech News gathered some quotes that were particularly revealing of the current state of the industry and of what’s to come.
Dean Sapp, senior VP of InfoSec Risk and Compliance, Filevine
“The cybersecurity insurance industry has gotten very interesting lately.
Having gone through several renewals and looking to add additional coverages, one of the basic [conclusions] I’ve come to is: if you can get it, you probably don’t need it, because your program is already robust enough.
I had to fill out a ransomware security questionnaire [which] went in-depth on everything we’re doing to stop ransomware. Then I had to fill out a cloud security questionnaire that talked about every cloud vendor, every component in any of our open-source code and [an] in-depth analysis of our code and our code stack. Then we have to talk about operational security, IoT devices, IT security, the full gamut. So once you’ve gone through that six- to eight-week process of talking to the auditors that specialize in each of those risk areas, and then getting them comfortable—you really don’t need cybersecurity [insurance] anymore.”
Kristen Sonday, CEO and founder, Paladin
“I pitch investors all the time [in the access to justice legal tech space]—and the one pushback I get repeatedly is ‘The market’s not big enough, the market’s not big enough.’
Up to 5.1 billion people do not have meaningful access to justice. And if that is not a big enough market, then I don’t know what is—like, I should just leave the industry entirely.
But there’s a huge opportunity for us to invest in these tools. Obviously, not just for our regular day-to-day billable work, [but] to figure out more creative ways to export them to the community.
And we are starting to see a lot of firms spin up really interesting projects for local legal services organizations and talk about other work in their community, police departments, court systems, as a way to pilot new tools and get other folks across the firm [involved in] all these projects.”
Zach Abramowitz, CEO and founder, Killer Whale Strategies
“Have you noticed in the headlines that a lot of law firms have been building their own AI tools? Why are they doing this? Like, why aren’t they just using a startup? And the answer is simple.
It’s not clear that you have to be working with startups anymore. The reason you worked with a startup in the SaaS period was that they were building the AI. They didn’t have to build the infrastructure and [could] just focus on the software [and] the user experience. The big part of what they were building was the AI, which [law firms] couldn’t build.
But today, a lot of law firms [are] looking around saying, ‘I don’t see any reason why we necessarily need to work with a startup. Maybe this is something we can build ourselves.’
I think the jury’s still out on whether that’s true or not. But the concept of the build-buy calculus has shifted, that’s for sure.”
Manish Agnihotri, co-founder, Coheso
“[Large language models], for the first time in technology history, pose a new challenge when it comes to input and output validation.
So I can think of a parallel: SQL injection—where people would manipulate what goes into the system and get access to certain database records or maybe manipulate the database.
What LLMs do is they’re incredibly powerful in generating realistic text. And what that means is that you can not only manipulate the input, but you can manipulate the input in a way that manipulates the output. It can generate very good text and very good code. And what that means is [that if] the system can execute the code, you can very well do privilege escalation, you can do remote code execution, and that can be a whole new challenge.
So the difficulty [with LLMs is] not only that the input side is problematic, but that the output side is problematic as well.”
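The output-validation concern Agnihotri describes can be made concrete with a minimal, hypothetical sketch (not from the panel, and not any particular product’s code): if an application acts on LLM-generated text verbatim, injected instructions can ride along in the output, so the text should be treated as untrusted input and checked against an allowlist before anything executes. The `ALLOWED_COMMANDS` set and function name below are illustrative assumptions.

```python
import shlex

# Hypothetical allowlist: the only actions this application will take
# on behalf of the model. Anything outside it is refused, so injected
# text such as "status; rm -rf /" never reaches a shell.
ALLOWED_COMMANDS = {"status", "list", "help"}

def validate_model_output(output: str) -> str:
    """Treat LLM output as untrusted: tokenize it, then require exactly
    one allowlisted command instead of executing the text verbatim."""
    tokens = shlex.split(output)
    if len(tokens) != 1 or tokens[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"rejected untrusted model output: {output!r}")
    return tokens[0]

# A benign completion passes; an injection-style completion is refused.
print(validate_model_output("status"))         # prints "status"
try:
    validate_model_output("status; rm -rf /")  # injection attempt
except ValueError:
    print("blocked")                           # prints "blocked"
```

The same allowlist-over-denylist posture applies on the input side (validating what reaches the prompt), which is the SQL-injection parallel the quote draws.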
Kenny Leckie, senior technology and change management consultant, Traveling Coaches
“I can’t get away from the fact that we need to educate users [and] leadership [on generative AI adoption], and there’s not a lot of great information about how to work together on this.
This is a discussion [your] leaders [and] product [teams need to have in order to] have the right lens and filter to make the right decisions for the use of generative AI in your firm.
[When it comes to educating your leaders] learn to speak in the language of the business and the firm, particularly as it relates to generative AI … learn the language of the firm.”