
EXL: Generative AI modernisation starts with data, not AI


Generative Artificial Intelligence (gen-AI) is never out of the news headlines, equally full of hype and promise, but tough to pull off – so where should a business start? Organisations embarking upon gen-AI initiatives often focus on the tools, the implementation process and the (hoped-for) intelligence factor, but in the chicken-and-egg world of AI, shouldn’t they start with data modernisation before they approach gen-AI modernisation? One man who thinks so is Kshitij Jain, EMEA practice head and global chief strategy officer for analytics at EXL.

The problem (or at least one of the problems) with gen-AI development is that projects in this realm often start with a focus on the enterprise application functions that firms want to automate, enhance and extend. But, as we know, gen-AI doesn’t get out of the parking lot unless the Large Language Models (LLMs) behind it are fed a rich stream of data, including proprietary enterprise data surfaced through Retrieval Augmented Generation (RAG) services.
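To make that relationship concrete, the Python sketch below shows the basic RAG pattern: proprietary documents are retrieved and prepended to the prompt a generative model sees. It is a minimal illustration only; the toy keyword scorer stands in for an embedding-based retriever, the generate() function is a placeholder rather than any real model API, and the sample documents are invented.

# Minimal RAG-style sketch (illustrative only): proprietary documents are
# ranked by a toy keyword scorer and prepended to the prompt sent to a
# generative model. generate() is a placeholder, not a real API call.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(query_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would call a hosted model here."""
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

documents = [
    "Policy PX-12 covers flood damage up to 50,000 euros.",
    "Claims over 10,000 euros require a senior adjuster sign-off.",
    "Office opening hours are 09:00 to 17:00 on weekdays.",
]

question = "What sign-off is needed for a 20,000 euro claim?"
context = "\n".join(retrieve(question, documents))
answer = generate(f"Context:\n{context}\n\nQuestion: {question}")
print(answer)

The point of the pattern is simply that the quality of the answer depends on the quality and reachability of the enterprise data being retrieved, which is exactly where data silos start to bite.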

Drawing on experience from working with its own customer base, the EXL team says that many businesses that have launched gen-AI pilots are finding that getting access to the data they need to power these initiatives is not as easy, or as affordable, as it sounds. Nearly three-quarters (74%) of respondents to EXL’s recent Enterprise AI Study said that data silos have been the primary barrier to enterprise-wide AI integration.

A flip in the conventional wisdom

“With data housed in different silos within the organisation, the data platform modernisation effort required to mobilise data that can feed gen-AI solutions is becoming more time-consuming and costly than the solutions themselves,” said Jain. “There is a solution, however, that requires a flip in the conventional wisdom. Instead of focusing on data to power gen-AI, businesses need to focus first on using gen-AI to unlock their data. This approach, which makes use of the power of gen-AI as a ‘coding engine’, combined with invaluable input from human developers to validate the results, is capable of dramatically streamlining the data management process.”

As a working example, Jain and his team point to a recent EXL project with a multinational bank that was trying to move away from a legacy SAS system to a Google Cloud Platform (GCP) data estate. Historically, such a migration would have been a multi-year, multi-million-dollar exercise. Using gen-AI to automate the coding process, it was possible to take a much more efficient approach and complete the modernisation project in significantly less time.
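A minimal sketch of what that ‘coding engine plus human validator’ pattern can look like follows, assuming a SAS-to-BigQuery translation; the translate_with_llm() and human_review() stand-ins, the data class and the sample backlog are illustrative assumptions, not details of the EXL engagement.

# Hypothetical sketch of the 'gen-AI as coding engine' pattern for a SAS-to-GCP
# migration: the model drafts target code, then automated checks and a human
# reviewer gate every change before anything ships.

from dataclasses import dataclass

@dataclass
class MigrationItem:
    name: str
    sas_source: str
    draft_sql: str = ""
    status: str = "pending"   # pending -> drafted -> approved / rework

def translate_with_llm(sas_source: str) -> str:
    """Placeholder: a real pipeline would prompt an LLM to emit BigQuery SQL."""
    return f"-- BigQuery draft generated from: {sas_source.splitlines()[0]}"

def human_review(item: MigrationItem) -> bool:
    """Stand-in for the human validation step; defaults to withholding approval."""
    return False  # nothing ships without an explicit sign-off

backlog = [MigrationItem("monthly_risk", "PROC SQL; CREATE TABLE risk AS SELECT ...;")]

for item in backlog:
    item.draft_sql = translate_with_llm(item.sas_source)
    item.status = "approved" if human_review(item) else "rework"
    print(item.name, item.status)

The design choice worth noting is that the model only ever produces drafts; the pipeline treats human approval as the default blocker, which mirrors the division of labour Jain describes next.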

AI works best at bookends

“It is important to note, however, that this was not a push-button process that simply set a gen-AI code-writing tool loose on the underlying codebases. We should point out that generative AI is strongest at the ‘book ends’ of the platform modernisation value chain: it is good at understanding the legacy system and at writing the code in the target language. However, when it comes to designing the details in the middle – like defining the target architecture and data model – there is no substitute for human expertise. The trick is getting machine and human to work together, optimising the strengths of both,” clarified Jain.

In another EXL example, a large insurer was transitioning out of a legacy application made up of tens of thousands of individual programs, each of which contained call-outs to other programs that determined the flow of logic. Jain makes a fundamental point here: while these nuances are clearly visible to human developers with institutional knowledge of the business and of how its codebases were assembled over time, most would be lost on a generic gen-AI model.

“The human developers needed to isolate individual logic from the complex mesh of programs and then apply gen-AI tools to extract the business rules to test the new application,” said Jain. “This highly targeted approach, which uses gen-AI for what machines are best at and human programmers for what humans are best at, enabled the insurer to access the codebase it needed to keep data flowing through the new application into the future. More importantly, it allowed them to do so quickly, cost-effectively and reliably.”
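The shape of that targeted approach might look something like the sketch below, where a human-chosen entry point defines the slice of the program mesh that a gen-AI extractor is allowed to see. The call graph, program names and extract_rules() stub are hypothetical, used only to illustrate the isolation step Jain describes.

# Illustrative sketch of the insurer example: a human analyst names the entry
# programs, the closure of their call-outs is isolated from the wider mesh,
# and only that slice is handed to a (placeholder) gen-AI rule extractor.

call_graph = {                      # program -> programs it calls out to (hypothetical)
    "CLAIM_INTAKE": ["VALIDATE_POLICY", "ROUTE_CLAIM"],
    "VALIDATE_POLICY": ["LOOKUP_COVER"],
    "ROUTE_CLAIM": [],
    "LOOKUP_COVER": [],
    "BILLING_BATCH": ["LOOKUP_COVER"],  # unrelated flow, should stay out of scope
}

def isolate(entry_points: list[str], graph: dict[str, list[str]]) -> set[str]:
    """Depth-first walk keeping only programs reachable from the chosen entry points."""
    in_scope, stack = set(), list(entry_points)
    while stack:
        program = stack.pop()
        if program not in in_scope:
            in_scope.add(program)
            stack.extend(graph.get(program, []))
    return in_scope

def extract_rules(programs: set[str]) -> list[str]:
    """Placeholder for the gen-AI step that mines business rules from each program."""
    return [f"rules extracted from {p}" for p in sorted(programs)]

scope = isolate(["CLAIM_INTAKE"], call_graph)   # entry point chosen by a human with domain knowledge
print(extract_rules(scope))

The human contribution is choosing the entry points and judging what belongs in scope; the machine contribution is churning through the resulting slice to extract candidate rules at speed.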

Beyond blunt objects

Coming full circle in this discussion, Jain and the EXL team suggest that there has been a tendency in the mainstream discussion of gen-AI to focus on ‘blunt object use cases’ like code-generation, customer service co-piloting tools or drafting tools. 

“These are all important parts of making the potential of gen-AI come to life for end-users. But, as businesses now focus on the hard work of making those use cases a reality, it’s becoming clear that the challenge – and the role of gen-AI in helping to solve that challenge – is much greater than those simple use cases would suggest,” said Jain. “Today we can say that gen-AI is both the catalyst of change to modernise data platforms, and, increasingly, a critical agent of that change, helping developers slash the amount of time required to get the data they need to create truly transformative solutions.”

The key, it appears, is knowing how and where to apply these new generative intelligence functions in conjunction with human programmers and end-users, to get the most out of them for the benefit of the business, for its users and, indeed, for the greater good of green coding sustainability too.


