Generative AI

The 10% Truth: Navigating the Real Value in the AI Landscape


Generative AI might be the technology buzzword of the year, but one industry leader is warning against believing all the hype.

According to Patrick Bangert, the senior vice president of data, analytics and AI at cloud solutions service provider Searce, “around 90% of the statements you hear about AI are either lies or propaganda.”

“The key is to identify the 10% that are genuine and work with those because they do exist,” said Bangert.

“But most of the use cases you see popularly on social media I would put into the entertainment category; they are fun and cool and put a smile on your face, but they are nothing more than that. In terms of the use cases that drive actual value that I can measure in dollar or kilogram terms, there are not that many.”

Small world

Bangert’s comments on the AI bubble are directed not so much at the AI projects that organizations are “essentially hiring” by working in the cloud as at the ten or so companies, primarily Silicon Valley-based, that are actually creating the AI technology.

“It’s a very small world, and we can ask if those companies are going to get the money back on their investments,” said Bangert.

“It’s very questionable. A lot of these companies are loss-making, and if you look into that Generative AI box, the investments have been heavy, and the returns have been slim,” he explained.

“We are close to Google and somewhat close to AWS, and the strong hope is that this year we will see a greater number of serious companies adopting Generative AI and traditional AI at a production level and paying significant fees, but the reality is that I would struggle to name one or two businesses that have done that.”

“Around 90% of the statements you hear about AI are either lies or propaganda.”

Looking at the small group of companies that actually have an LLM (large language model), Bangert sees industry consolidation ahead and believes their number will be reduced by half.

“I think that the top players, like Amazon, Microsoft, and Google, will end up purchasing the others at some point, either outright or in a form where the brands continue to exist but are commercially owned,” he said.

“There are a vast number of companies out there, too, that are developing apps on the basis of these LLMs, and out of those thousands of companies, I think the majority will simply disappear.”

Bangert also warned that while many in the industry looked at advances in Gen AI as a significant “step change,” there were very few use cases “that AI can do and that offer real added value.”

Improved process

This is not to say that AI is not currently useful. Bangert gave several examples of AI projects Searce had worked on with clients that improved processes and saved time and money.

Large language models, he said, were particularly good at programming languages.

“So if I say in English that I want a function in Python that does this, that and the other, it will translate into Python,” said Bangert.

“There is very good evidence that this increases the capability of my development team by 50%, so I need 50% fewer hours to accomplish the same outcome. But I would say that in order to make AI valuable, the main thing is to change human processes because if you want to benefit from driving a car, you need a driver’s license.”
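To make the kind of translation Bangert describes concrete, here is a hypothetical illustration of what a coding assistant might return for a plain-English request; the request, the function name, and the data shapes are assumptions used only for illustration, not part of Searce’s example.

```python
# English request: "I want a function that, given a list of orders,
# returns the total order value for each customer."

from collections import defaultdict

def total_value_per_customer(orders):
    """Sum order amounts by customer ID.

    orders: iterable of (customer_id, amount) pairs.
    Returns a dict mapping customer_id -> total amount.
    """
    totals = defaultdict(float)
    for customer_id, amount in orders:
        totals[customer_id] += amount
    return dict(totals)

# Example usage:
# total_value_per_customer([("acme", 120.0), ("acme", 30.5), ("globex", 99.0)])
# -> {"acme": 150.5, "globex": 99.0}
```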

In another example, which Searce demonstrated in a boardroom setting, an AI application translated a question asked in English into the database language SQL; the resulting query was then run against the database, which returned the answer.

“The answer was either a table of numbers or was translated using generative AI into a dashboard, and this happened instantaneously,” said Bangert.

“The board member asks a question, and immediately up comes a bar graph; there are follow-up questions with answers on the dashboard, and everybody saw this live,” he noted.

“Prior to Generative AI, an analyst would have been sent off to answer that question and would have taken maybe two days; they would have come back to the board member, who would have had a follow-on question, and it would have taken three or four weeks for the final answer and a decision to be taken.”
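A minimal sketch of that question-to-dashboard flow might look like the following. The sales table, the sample data, and the english_to_sql placeholder, which stands in for the generative AI translation step, are all hypothetical and are shown only to make the pipeline concrete.

```python
import sqlite3

def english_to_sql(question: str) -> str:
    # In the demo Bangert describes, a generative AI model produces the SQL
    # from the English question. Here it is hard-coded as a placeholder.
    return "SELECT region, SUM(amount) AS revenue FROM sales GROUP BY region"

def answer(question: str) -> None:
    # Hypothetical in-memory database standing in for the company data warehouse.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE sales (region TEXT, amount REAL);
        INSERT INTO sales VALUES ('EMEA', 120.0), ('EMEA', 80.0), ('APAC', 95.0);
    """)
    sql = english_to_sql(question)          # English -> SQL (the AI step)
    rows = conn.execute(sql).fetchall()     # run the query against the database
    for region, revenue in rows:            # render the answer as a simple table
        print(f"{region:6} {revenue:10.1f}")

answer("What was revenue by region last quarter?")
```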

Barriers and divides

While acknowledging AI’s utility, Bangert sees a barrier in the current lack of regulation, which means that “customers do not know what they will be allowed to do and what they will be forbidden from doing next year.”

He also observed a polarizing divide in opinion on AI, comparing it to the divide between Democrats and Republicans in the US.

“We are split between people who think Generative AI is absolutely fantastic and people who think it’s completely untrustworthy, and the number of people in the middle is getting smaller,” said Bangert.

Asked about recent advances in GenAI, such as next-generation chatbots, he said the differences in performance were “unnoticeable.”

“I feel we have hit a plateau, and the next step change in my mind would be to somehow teach these models reasoning,” said Bangert.

“They are very good at manipulating language, but they have no idea what we are talking about; they have no knowledge of the underlying world and make errors, and sometimes these could be disastrous,” he added.

“AI researchers will just recommend we give them more data to train on, but the problem is that all these models have been trained on the internet, which means that more data is just unavailable to them.”

Looking for a breakthrough

Any step change for AI, he said, would come from a “fundamental research breakthrough on a mathematical level.”

The current transformer architecture for AI, he noted, is around six years old, and there has not been a major advance since then.

“This isn’t something you can engineer or fund your way out of,” said Bangert, who is himself a mathematician.

“It’ll come down to some intelligent individual acting on their own somewhere in a research lab at a university, and they’ll have the spark,” he added.

“That’s what we need to get us to this step of how we can get these networks to have logical thinking because right now, we in the research community don’t know how to do it.”

Image credit: iStockphoto/wildpixel


