Trying to keep AI from sneaking into your environment? Good luck!
Enterprise IT today has generative AI hitting it from every angle: from systems it directly licenses for millions of dollars, from large language models and other genAI algorithms sneaking into every SaaS product globally, and from employees and contractors using genAI even when told it is prohibited. It permeates every cloud environment, is creeping into IoT, and is overwhelming every third-party app your company leverages.
With SaaS and its wholesale embrace of all things genAI, IT decision-makers are not so much deciding AI strategy as reacting to it. “AI software has come into most offices and organizations without the CIOs and CTOs being aware,” said Atefeh “Atti” Riazi, CIO of the $12 billion media firm Hearst, which today has more than 350 brands and thousands of third-party vendors. Many of these executives “have at least 50 apps on their phone, and no one is aware” of precisely what those apps can do in terms of extracting and using sensitive data, she added.
On the other hand, she said, an enterprise can’t go to the other extreme and try to lock everything down. “You can’t be that strict, because then you would choke off the ability of organizations to innovate.”
Nevertheless, IT leaders want to wrest back control of their systems, lest “Sneaky AI” — software vendors adding AI components to their products without explicitly telling customers — take over. Some advocate adding new legalese to contracts, regulating how and where genAI can be used and sometimes requiring permission to implement it. Others, including Riazi, are more pessimistic, arguing that generative AI demands wholesale changes to IT governance.
Riazi’s position is that current IT governance rules were created in a vastly different era, when physical assets mattered most and when most if not all critical systems were housed on-premises. “Auditing and governance is very much structured for a physical world,” she said.
“Today it is almost impossible to know all of the AI code that has been put in [enterprise software] and its impact. This is not governable,” Riazi said. As for current IT governance procedures: “Throw out the window.”
“Within three years, programmers will not be writing most code. AI will,” she predicted, estimating that 60% to 70% of code will be created by AI in that timeframe. “We can’t manage and govern this [AI] space [in the old ways].”
Anna Belak, director of the office of cybersecurity strategy for Sysdig and formerly a Gartner analyst, agreed with Riazi’s assessment that the explosion of AI means that enterprise IT should fully rethink its governance tactics.
“Why not? IT has never been good at governance anyway,” Belak said. “A new form of governance? That’s not a huge leap from the nonexistent governance we have today” in many enterprises.
Indeed, Belak said the need for change in IT governance is not solely about AI. “As you go to more cloud and Kubernetes and whatnot, it is harder for anyone to explain precisely what is going on. LLMs are simply the breaking point.”
Compliance and other conundrums
A massive challenge with AI is regulatory compliance in all its painful forms. Governments at the city, state, and federal levels are increasingly requiring companies to place limitations on what AI can do within their operations.
Such AI awareness — let alone the controls required to be compliant with many AI regulations — is often not viable, Belak said. “The AI rules are ludicrous, and you can’t really be compliant. You have no idea if [generative AI] is secure. And yet it is training on all of your data.”
A related generative AI problem is not merely what data it accesses, but what it can infer or guess based on that data. Those speculative leaps may turn out to be more problematic than anything else, said David Ray, chief privacy officer at BigID, a data security, privacy, compliance, and governance provider.
“The information being collected at the prompt is often being collected in ways that people don’t realize. Things like age and gender can be inferred,” he said.
Ray pointed to “model drift” as especially worrisome. “As the AI evolves, can you trust it?” He added that the hallucination problem with genAI can’t be overstated. “GenAI is like a first-year consultant. They feel like they have to say something, even if it is wrong.”
IT must prioritize its understanding of access and visibility for every AI program in the enterprise environment. “You absolutely have to understand what it is doing with that data. Will it show documents or emails from HR that weren’t properly locked down?” Ray said. “If any of these factors change, you have to be notified and be given the ability to opt out.”
Updating vendor contracts for AI
Not surprisingly, attorneys are among those who say enterprises can protect themselves by updating software licensing contracts to reflect the new AI reality. David Rosenthal is a partner at VISCHER, one of Switzerland’s largest law firms; he specializes in data and technology issues as well as compliance rules. Rosenthal advocates for such contract changes and recently posted on LinkedIn a list of suggested contract additions for AI protections.
But Rosenthal stressed that executives must first agree on an AI definition. Many typical business functions are fueled by AI but not usually seen as AI. “AI is simply a system that has been trained instead of only programmed. Programs that take a PDF and convert that picture into readable text — that [OCR function] is AI. We have to be careful about what is referred to as AI.”
The AI — and especially generative AI — trend “has increased the concern for our clients [who are worried] that they may be relying on technology that they can’t really control,” Rosenthal said. “They often don’t have the [IT] maturity to control their suppliers.”
Another attorney, Andrew Lee, a partner at the Jones Walker law firm in New Orleans and a member of that firm’s corporate compliance group, said that contract changes may not prevent AI compliance issues, but they might mitigate them somewhat.
“It’s natural for lawyers to think they can do a one-way solution by doing a contract that restricts a vendor,” such as a clause that indemnifies the customer if a vendor upgrades a product with an LLM component and unwittingly gets the enterprise into compliance difficulties, Lee said. “Maybe I don’t solve [the problem], but I can try and shift the liability.”
Where vendor controls fall short
Setting contractual obligations for vendors does not necessarily mean that they will be honored, especially if no one is routinely checking, BigID’s Ray pointed out. “Some companies will say whatever they have to [say] to get through the contracting process,” he said.
Hearst’s Riazi is also skeptical that trying to rein in the AI explosion via new contract requirements would work. “We have many terms and conditions in our contracts” today, but they are often not enforced because “we have no bandwidth to check,” she said.
Douglas Brush, a special master with the US federal courts and a consultant, is another who doubts that vendor controls will do much to meaningfully address the AI problem.
“I think it would be improbable to expect [a vendor] to put in the technical controls to appropriately mitigate the AI risks. I can’t see a way, because [AI] is still a black box. There’s not nearly enough transparency,” Brush said. “GenAI tends to vacuum up everything, all the way through the operating system.”
Another attorney, EY managing director Brian Levine, takes his concern about vendor restrictions one step further, saying it’s legally questionable whether such contractual demands are even appropriate.
“Historically, we don’t get to know how our vendors’ software and products work. We expect them to be fit for purpose. I don’t think companies should be under an obligation to show how they are achieving results,” Levine said.
In the end, it may not matter whether vendors won’t, can’t, or shouldn’t be required to divulge every use of AI in their products — one way or another, AI is going to sneak in. Which brings us back to Riazi’s call for wholesale changes in IT’s approach to governance.
“We still look at these things from an old perspective,” she said. “It is unprotectable at some point. [AI] is simply not governable through traditional means.”