
Partnership On AI CEO On The Best Way To Bring AI To Business


The Partnership on AI is a nonprofit organization dedicated to ensuring that AI developments advance positive outcomes. CEO Rebecca Finlay has a wide perspective on how businesses are bringing AI into their operations, what they plan to do with it and what’s likely to happen next. I talked to her about how businesses are implementing AI and how they can get the most benefit from the technology.

This conversation has been edited for continuity, brevity and clarity. An excerpt appeared in Sunday’s Forbes CEO newsletter.

It’s been roughly a year and a half since ChatGPT made AI something everybody is talking about all of the time. Where are CEOs now, in terms of their conversation about AI and understanding of its power?

Finlay: CEOs are trying to determine whether it is as good as it sounds. AI, and the way in which AI is developed, and certainly we’ve seen this with ChatGPT, has the potential to be truly transformative. We see in survey after survey that CEOs expect significant change over the next five years, both within their companies and within the sectors they operate in, with regard to both AI and generative AI technologies. They’re in a moment where they are trying to assess. What are the benefits? What is the return on the investment of this technology in terms of our business goals: better products, better services, better relationships with our clients and customers? And what are the risks? Those run all the way from privacy concerns around the data used to train these models, through to the accuracy of some of these generative AI models, through to other impacts on their workforce, potentially including the use of synthetic media or other forms of malicious acts.

I see CEOs really attending to the reality that most companies these days are AI companies in some way, shape or form, and really trying to understand: Okay, what are the problems I need to solve where I have both the data and an AI system, and where the inferences that come out of it are actually going to be useful for driving my business forward? And we’re seeing that in all sorts of different applications.

In general, how are companies doing with identifying where AI can work for them? Are they homing in on the best places to use it? Or is there more of an interest in what looks interesting, but might not be the best place to start?

What I’m hearing is the desire to start iterating, piloting and testing different systems on different problems with different datasets in order to derive different benefits. And this is certainly what I recommend. I think if a company is looking at an AI system first and foremost as a driver of efficiency, that needs to be weighed against all of the potential risks that come with it, particularly with regard to deploying it into the workforce. I urge two things. One, have in place a structure within the company that allows you to think about the cross-departmental impact of AI. Maybe that’s a chief data analytics officer or chief AI officer, or a council: a group that comes together that definitely includes IT, but also includes people from your legal teams and elsewhere, who are really trying to explore how to understand and test out this technology in order to see how you can benefit from it. And two, what are the tools we need that will actually help us to be both innovative and responsible at the same time?

Ultimately, that all comes down to people. It all comes down to having the right people on your leadership team giving you an honest assessment of the possibilities. It also means a good relationship with your board of directors, making sure they have some visibility, from a risk management perspective, into how AI is being deployed in your company. And it also means having really good disclosure requirements with the public and with your customers and clients, in terms of how their data may be used or how you might be using these systems. That also stretches to your partners, both your suppliers and those further down the value chain: understanding what you are receiving from them that may have been built with different data, and the efficiency, capabilities and responsibilities of those models as well.

What do you see as an ideal way a business can assess the way to go about using AI?

For me, an ideal assessment is one that doesn’t prefer innovation and speed over responsibility. I say that the reason why we can drive fast is because we have brakes in our cars. The reason why we can drive fast and be safe is because we have seatbelts, right? Those are two safety innovations that actually drive our capacity to move more quickly, more effectively and more safely down the road. And so I encourage companies to get started, because there are all sorts of ways in which AI could be helping them to serve their customers better. I also say if you’re going to get started, make sure that you are considering the responsibility requirements right from the very start. Then you’re building, I think, a much more effective system that’s going to drive much more effective products down the road as well.

What kind of people in a company should have a voice in AI decisions to make sure that layer of responsibility is there?

Making sure that you have a council of advisors who [meet] regularly. I think the council can do a couple of things. One: It can monitor developments in the broader sector and perform a learning and educational role. It can also provide space to do some of this piloting. And then once systems are deployed, it can do some work to monitor them post-deployment, right? The thing about these AI systems is that they can be trained on one set of data, but once they’re out in the wild, they need to be monitored as well to make sure that they’re continuing to do what you want them to do. That, I think, is really important.

The other piece that is really essential: if you are deploying automated systems into your workforce, say you have a call center and you’re going to use a chatbot in that call center, it is really important that the workers in that call center be part of that technology deployment. First, because they know what they need. They know where the gaps are; if a system is giving them, for example, the ability to answer questions much more quickly or in different languages, they’ll know what they need from it. And secondly, all of the research to date has shown that these systems can actually drive worker well-being and better productivity if they are deployed in partnership with the workforce that’s actually using them.

When CEOs are looking at AI right now, do they seem to have a handle on kind of everything it takes? Are they aware of the responsibility, safety and security issues, whether they need to hire more people, if more training is needed for existing employees?

It’s a question of, do you have the right people with the right skill sets? It used to be, and it continues to be, very important that you have very strong IT, computer science and computer engineering expertise in house. But what we’re seeing with some of these generative AI models is their capacity to code, and to pick up some of the coding work that your software engineers may have been doing previously, which might create space for them to work on some of the innovations we’re talking about around AI.

But it is also about understanding from your privacy professionals and your legal advisors internally, as well as your product and marketing professionals: What are they each hearing in their own professional associations? We’re seeing a lot of interest from lawyers’ associations in better understanding what the legal issues are. And one of the key pieces there, of course, for both your lawyers and your privacy officials and others, is the evolving policy landscape. We now have the EU AI Act, so if you’re a company that is in the EU market, understanding what the implications are for you is really crucially important. And then we have a number of states in the U.S. that are developing different state-level privacy bills. So really, you need to have a community that’s both attending to what’s happening in-house and really understanding, from their professional perspective, what some of the emerging trends and opportunities might be.

And then, of course, you have organizations like the Partnership on AI, which is really a learning community coming together to explore some of those topics in real time. One of the reasons why many of these companies become our partners is that they not only believe in our mission, to develop responsible and safe AI that benefits the many, not the few, but they’re also learning a lot from being right there, developing some of these risk management frameworks, transparency requirements and worker integration guidelines. These are all things that we’re working on right now. That’s probably one of the most interesting things about AI: It’s still very much a young field, and there aren’t a lot of clear frameworks to turn to to better understand how to manage risk, and how to do that in real time while attending to innovation.

When CEOs are looking at AI and talking to their finance team, where is the priority? Are they looking for a quick return on investment? Is it eventually saving money? Or is it all about keeping up with other companies?

CEOs are dealing with these demands across their business, no matter whether it’s AI or otherwise. There’s always this tension between speed, the pace of change and the need to innovate for efficiency and productivity; together with, what is my competition doing, and how do I make sure that I’m in a competitive position to move forward? And then the other piece of it is the risk: the unknown pieces of these models. I think CEOs are in this moment weighing all of those, getting a good team in place internally to give them advice, and trying to engage as much as they can with experts outside their companies who can help give them some of that advice as well. And most importantly, I think, with the board, it is coming to a clear understanding about what level of transparency the board needs in order to be assured that the AI decisions taking place are being made transparently and responsibly.

One of the interesting developments I’ve noticed is that some [publicly traded] companies have started including language within those [annual financial] filings that specifically speaks to the risks: We’re using AI; what are the risks that we want to make sure the public is aware of? That, to me, is a really interesting form of governance, one that really shows how important it is for companies to attend to the innovation components of AI while doing it in as transparent a way as possible.

At the beginning of the interview, we started off by saying it’s been about a year and a half since ChatGPT started the big movement toward AI. Where do you think businesses will be with AI a year and a half from now?

In a year and a half, I expect that a number of companies will be moving from the piloting, iterating and exploration stage into the deployment stage. Of course, there are many companies that are already using AI and machine learning in all sorts of very low-risk ways across their portfolio of work.

Every company is an AI company and a data company. We’re going to start to see it really integrated across a number of different areas, particularly when it comes to generative AI. First of all, the generative AI models are progressing. They are getting better. They are getting more accurate. “Hallucinating” has become a loosely used term, but the reality is that some of these models are giving you inaccurate information, so there’s all sorts of technology development underway to think about how that could be improved.

That will be very interesting, because we’ll start to see generative AI applied in some really interesting ways. We could see it applied, for example, in a marketing department, where you need to do some creative thinking about how you are positioning your messaging and how you are developing your materials. You could see ways in which it could really drive a whole new level of product development. Or we could see it in R&D, allowing companies to do some modeling and scenario planning using synthetic models, much as a manufacturing center might develop them.

I think in 18 months’ time, we’re going to hear that this has moved forward in a way that we haven’t yet seen to date, and I think that’s what’s going to be really exciting. My recommendation is, if you’re a CEO and in 18 months you really want to have some applications of AI and generative AI that are driving value for the organization, get started. Get started by putting the people in place, and by putting the resources and tools in place. We like to say document everything; it helps you internally to understand your work better, and it will help you provide your board with oversight of the work you’re doing. And get your people engaged in the conversation, whether that means upskilling your workforce or creating ways for them to get engaged, and creatively rolling it all together.

Any final thoughts?

As a species, we’re not particularly good at predicting the future. So I strongly encourage business leaders to keep an open mind about this technology and how it can truly advance and transform the work that they’re doing, through the lens of wanting to serve their customers and clients better. Each company has deep knowledge of what its customers need. It may be that generative AI isn’t the right thing to meet those needs, and that other forms of AI will help you do things better.

It’s a really exciting and innovative and uncertain time for CEOs to be thinking: Okay, how do I, and how do[es] my company, and how does my board want to be really ensuring that we make the decision that’s right for us, that’s going to drive the benefits that we know we need moving forward?


