Generative AI

Ivan Ostojic: Looking forward with AI


Ivan Ostojic is Chief Business Officer at Infobip, the omnichannel marketing platform focused on communications, conversations, chat and messaging. When we spoke, he was driving to an airport in Croatia, but he was still able to address the “trough of disillusionment” that might be just around the corner for some — not all — uses of generative AI. (Interview edited for length and clarity.)

Q: It seems every manager has been given a mandate to bring genAI into their technology and processes. But I know you think they’re trying to implement some over-complicated solutions. Is that what is going to tip us into the trough of disillusionment?

A: Yes and no. There will be some disillusionment for people who thought this was a panacea, where you don’t need any data structure because the technology is intelligent. There is a lot of marketing hype creating that perception. But when you look at what serious enterprises are doing, they never went big on implementing this technology because they were worried about hallucinations and so forth.

We are seeing very cautious adoption, where people are mainly trying use cases that are more internal — like knowledge summaries, quick ways for employees to find answers. Most of them are cautious about exposing this technology to customers because of cases that have made it into the press, like the Air Canada case.

Q: There are internal problems as well. A surprising number of businesses have had sensitive corporate data exposed on ChatGPT.

A: You’re right. People were using it to make themselves more productive without following proper corporate guidance. With the general ChatGPT, without guardrails, the data is exchangeable. However, there are now implementations, in particular working with Microsoft, that actually secure the data and limit its spread to other systems.

Q: What about these over-complicated implementations?

A: I think some people thought this technology could completely replace human agents. They’re going overboard just because of the hype. For example, if you want to book an appointment, it’s much easier to make two or three clicks, seeing a calendar and the dates, than to type a prompt for ChatGPT. There is a risk of misunderstanding human semantics. People are trying to force it on every use case, even when other types of solutions are more appropriate.

We had this issue with translating our own website. Somebody thought we could translate the website using generative AI, but if it makes a mistake in one percent of cases, that mistake can be tragic. I still need to send the results to a translation agency to check everything, and having them check the original against the translation on every page doesn’t cost much less than having them do the translation themselves.

Q: You have senior executives telling managers they have to use this and then the managers trying to create use cases. That’s surely the wrong way around.

A: A hundred percent. What we’re doing is choosing KPIs. For example, I might want to double the speed of content writing for our team, and I think generative AI can help in specific ways (helping with the outline, with some rewording). I start with what I want as a business outcome and then see how this technology can help me, rather than vice versa. Of course, there can be some incubation areas where you try out ideas.

Q: What is Infobip doing in this area?

A: We are building an infrastructure for applications of generative AI. At the heart of this is a multi-bot, multi-large-language-model technology. We have intent-based bots, rule-based bots and generative AI-based bots, or assistants. We are training different assistants for different use cases: for example, an FAQ assistant, a general knowledge assistant, a customer service assistant and so forth.

We also have something we call the “orchestrator.” If you call a call center with a technical question, it can send you to a technical person; if you want to buy something, it will send you to a salesperson. The orchestrator understands your intent and routes you to the right place. It also does sentiment analysis (it can tell if you’re getting angry and move the conversation to a human), it does translations, and it does quality control to remove hallucinations.
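To make the routing idea concrete, here is a minimal sketch of that kind of orchestrator in Python. The intent keywords, assistant names and escalation rule are illustrative assumptions rather than Infobip’s implementation; in a real system the intent and sentiment checks would be backed by models (or dedicated bots), not keyword lists.

```python
# Minimal orchestrator sketch: classify intent, check sentiment, route the
# message. All labels, keywords and assistant names below are illustrative
# assumptions, not a description of any vendor's actual product.

from dataclasses import dataclass

ASSISTANTS = {
    "technical": "technical-support-assistant",
    "sales": "sales-assistant",
    "faq": "faq-assistant",
}

# Stand-in for a sentiment model.
ANGRY_MARKERS = {"angry", "furious", "unacceptable", "terrible"}


@dataclass
class Route:
    destination: str
    escalate_to_human: bool


def classify_intent(message: str) -> str:
    """Keyword stand-in for an intent model or intent-based bot."""
    text = message.lower()
    if any(word in text for word in ("error", "broken", "not working")):
        return "technical"
    if any(word in text for word in ("buy", "price", "upgrade")):
        return "sales"
    return "faq"


def orchestrate(message: str) -> Route:
    """Route to the right assistant; hand off to a human if the user sounds angry."""
    intent = classify_intent(message)
    angry = any(word in message.lower() for word in ANGRY_MARKERS)
    return Route(destination=ASSISTANTS[intent], escalate_to_human=angry)


if __name__ == "__main__":
    print(orchestrate("My integration is broken and this is unacceptable"))
    # Route(destination='technical-support-assistant', escalate_to_human=True)
```

The design point is the same one Ostojic describes: the orchestrator sits in front of specialized assistants and decides both where a conversation goes and when it should leave the automated path entirely.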

Q: Some people believe that AI is going to deliver a fully automated enterprise. What you’re saying is, if anyone thinks they can do that today or tomorrow, then they are bound to be disillusioned?

A: The technology at the moment is not ready to do that, although it might go in that direction. A fully automated enterprise? We are far away from that.



Dig deeper: A blueprint for the new automation mindset


