Using Artificial Intelligence in Business: What to Know
“The most important thing for a business to consider when it uses AI is the ethical implications and ensuring transparency, fairness, and accountability in its implementation and use.”
That’s a verbatim response from ChatGPT when 425 Business prompted, “In one sentence, tell me what the most important thing is for a business to consider when it uses AI.”
When asked, “Can I trust AI if I use it in my business?” ChatGPT replied: “Trust in AI depends on various factors, including the quality of the data it’s trained on, the transparency of its algorithms, and the ethical considerations guiding its implementation; therefore, careful evaluation, monitoring, and oversight are crucial to ensure its reliability and effectiveness in your business operations.”
Finally, when asked, “Should I consult a lawyer before using AI in my business?” ChatGPT responded, “Yes; consulting a lawyer before implementing AI in your business is highly advisable, as they can provide valuable legal guidance regarding data privacy regulations, intellectual property rights, liability issues, and compliance with relevant laws and regulations, helping to mitigate risks and ensure your AI initiatives align with legal requirements and best practices.” Those replies were delivered in a couple of seconds.
And they came a couple days after 425 Business spoke with Melanie Seelig, a partner and intellectual property lawyer at Knobbe Martens in Seattle, about general issues for businesses to consider when using artificial intelligence tools — generative AI in particular. Seelig is among nearly 300 lawyers and scientists across the firm’s seven U.S. offices. All are focused on IP and tech law, and most hold science or technical degrees. Seelig holds bachelor’s degrees in mathematics and physics and largely works with technology clients. She is a longstanding Sammamish resident and has represented companies from the Eastside for more than 25 years.
Most of those clients are asking about generative AI at some level, Seelig said. Some are worried about it, others not so much.
First, a definition of generative AI, per a handout from the Bellevue Chamber at its Eastside Leadership Conference on “all things AI” last October: “A set of artificial intelligence systems and algorithms that can create text, audio, video, and simulations.”
ChatGPT is a generative AI tool trained specifically to produce text. At the chamber conference, one speaker, Tom Lawry, shared, “The smart folks at McKinsey say we’re going to experience more technological progress in the next decade than we have in the past 100 years.” Lawry, who is managing director of Second Century Tech, an AI transformation adviser to the health care industry, and a former Microsoft executive, added, “The only thing I would debate about that is I think 10 years is too long.”
His comment reflects the speed at which technology, and AI itself, is moving and reshaping the world. While AI isn’t new (think machine learning, for example), generative AI is advancing at warp speed.
Given the widespread public availability of many AI-based tools, such as ChatGPT, companies are knowingly or unknowingly incorporating AI-based tools into various aspects of their business processes, Seelig said.
“Every company’s trying to figure out, ‘How can we incorporate this into our business, and how can we do it in a safe, reliable, and thoughtful type of manner?’” Seelig said.
The benefits of generative AI tools such as ChatGPT, DALL-E 2, and Midjourney come from large language models that are often trained on billions of pieces of data. The need for such large amounts of training data, however, also creates some of the legal, business, and ethical issues that have started to arise, including intellectual property infringement, bias, and hallucinations, she said.
Companies large and small want to incorporate generative AI in ways that shield them from liability but allow them to be more advanced and competitive in their field, she said.
“And we are just at the forefront of looking (at) how to put guardrails around the use of generative AI because … it can hallucinate and inject bias into its results,” Seelig said.
Bias, for example, stems from the training data used in the large language model and can surface in how AI tools review job applications and determine which applicants advance.
The Associated Press Stylebook entry on AI describes hallucination as “an issue associated with the technology that produces falsehoods or inaccurate or illogical information.”
Generative AI output also can infringe on intellectual property, as recent lawsuits are demonstrating, Seelig noted.
When someone uses generative AI output created from information scoured from the web, questions can arise about who owns the ingredients of that output and how accurate it is. Red flags, such as watermarks on images, aren’t always apparent in the output. Human gatekeepers who filter that information or monitor the output are important, she said.
“If you’re going to use it, you should ideally … have at least someone closely analyzing what the output is,” Seelig said. “This includes spot-checking any issues with it before you use it in whichever way you’re going to use it. And it’s evaluating how you’re going to use it.”
The “how” matters, she said.
Generative AI output used for an internal educational presentation, for example, may have less risk than if it’s used for external branding or some other outward commercial use, she said.
In a PowerPoint presentation she delivers to companies on generative AI, a slide reads: “Where appropriate, use the tools to assist but not completely replace human authoring, editing, and proofing activities. Editing the output will often make it better quality and less likely to lead to third-party claims or bad press.”
Read the Fine Print
Seelig encourages reading the terms of service for generative AI platforms. While some terms may say you own the outputs, if the same outputs have already been given to someone else, those parties also could own them, she said.
“There’s certainly a concern as to whether you’re going to have exclusive rights to any output that’s produced — and my answer is probably, ‘You should not assume you will,’” she said.
Take care with input prompts, too. A well-informed prompt will deliver better results, but you should read the terms of service to see how that input data could be used.
“You don’t want to have provided a generative AI tool a lot of information and say, ‘Oh, I didn’t realize they … got to use it in this X, Y, or Z way,’” Seelig said of her “read-the-directions” recommendation. “I think that it is at least very useful to go in eyes wide open.”
Another of her PowerPoint slides urges bringing the use of generative AI tools and services in-house.
With ever-changing terms of service, “It’s critical that the legal and executive teams are fully aware of all uses of generative AI tools,” the slide said, adding that accessing tools on a phone or home computer doesn’t necessarily avoid legal business issues and might come with more risks.
Companies need to figure out how they’re going to adopt use of generative AI, what their risk profile is, and how they’re going to use it safely and minimize risk, Seelig said.
“A lot of these generative AI training models … if you read the terms of service, you input data into a prompt (and) you could be giving the owner of the generative AI tool rights to that data, and others may even potentially have rights to the output; thus you may not own the output,” she said. “In addition, if you have included proprietary information in the prompt, you may have, by virtue of using their tool, granted the owner or provider of the generative AI tool the right to utilize it to train their model. And once they use that information to train their model … that could find its way into someone else’s output. Thus, you risk that any input data submitted will not be considered confidential and may be publicly available. In fact, how a company is using generative AI tools is not confidential and can be easily discovered using the generative AI tool — that can give away competitive information.”
Instead of using a public version of a generative AI tool, she said, companies may consider private, corporate account options that offer enhanced protections.
“I think a lot of companies are (saying), ‘I have to use it, and how can I do it in a way that’s safe?’” Seelig said.
One message in her PowerPoint presentations: Do not include confidential or proprietary information in prompts, especially for publicly available generative AI tools.
Educate Employees
It’s important to educate employees about proper generative AI use, she said, agreeing with a suggestion that it’s like reminding employees not to click on unfamiliar attachments in emails.
“That is dead on,” she said. Employees need to be educated about the potential risks in order to minimize those associated with using generative AI, she said. Education will help them recognize when to ask a company decision maker how to proceed, whether it relates to the content of prompts or how the outputs can be used. When somebody pauses to confer with a decision maker, “You have a chance to intervene and course correct; you can say, ‘No, we can’t; we need to modify the data or how we’re going to use it,’” Seelig said.
People are trying to find the right guardrails that protect the rights of creators, inventors, authors, and originators of brands and marks while also advancing the use of AI tools that can have tremendous usefulness, she said.
“I think that’s the best-case result, but how do we get there?” Seelig said. “That’s the path that we’re on right now.”
President Joe Biden in October issued an almost 20,000-word executive order on the safe, secure, and trustworthy development and use of AI to create some of those guardrails. According to the Congressional Research Service, the order “directs over 50 federal entities to engage in more than 100 specific actions to implement the guidance set forth across eight overarching policy areas.” Those are safety and security, innovation and competition, worker support, consideration of AI bias and civil rights, consumer protection, privacy, federal use of AI, and international leadership.
AI issues can get complicated fast for businesses, but that doesn’t necessarily mean spending hours upon hours with an attorney. It could mean getting some basic guidance on what to look out for. Every company’s situation and risk profile are different, and some situations are far more complex and risky than others.
“I would hate to tell everyone just a blatant statement of, ‘If you’re concerned, you need to talk to a lawyer,’” Seelig said. “I think there needs to be a nexus with … ‘Let’s think about how you’re using generative AI tools.’ For example, if your use involves a significant commercial use or has a potential significant monetary implication, then, yeah, maybe talk to your lawyer.”