AI Stocks Fall as Investors Grow More Choosy


“Just saying ‘AI’ 15 times is not going to cut it anymore.”

So said Stuart Kaiser, head of U.S. equity trading strategy at Citi, speaking to the Financial Times (FT) Wednesday (June 19) about the state of artificial intelligence (AI) investments.

As that report noted, most of the stocks that jumped amid last year’s AI hype have dropped this year, a sign that investors could be growing more selective in backing companies that claim to benefit from the rise of artificial intelligence.

According to the FT, huge rallies by firms like Nvidia — now the most valuable public company in the world — have triggered a debate about whether the U.S. stock market is being fueled by speculative hype.

“AI is still a big theme, but if you can’t demonstrate evidence, you’re getting hurt,” said Kaiser.

The report said around 60% of stocks in the S&P 500 have gone up this year, but more than half the stocks included in Citi’s “AI Winners Basket” have declined. Last year, more than 75% of companies in that group had risen.

“Investors are looking a bit more at the earnings story among ‘AI’ names,” Mona Mahajan, senior investment strategist at Edward Jones, told the FT. “The differentiator with something like a Nvidia is they have delivered on the bottom line, showing real data.”

Elsewhere on the AI front, PYMNTS wrote Tuesday (June 18) about a problem vexing businesses: AI systems that confidently offer up plausible but inaccurate information, a phenomenon often referred to as “hallucinations.”

As companies increasingly depend on AI for decision-making, the risks presented by these fabricated outputs are coming into greater focus. At the heart of the issue are large language models (LLMs), the AI systems behind much of the newest technology businesses are adopting.

“LLMs are built to predict the most likely next word,” Kelwin Fernandes, CEO of NILG.AI, a company specializing in AI solutions, told PYMNTS. “They aren’t answering based on factual reasoning or understanding but on probabilistic terms regarding the most likely sequence of words.”

This dependence on probability means that if the training data used to develop the AI is flawed or the system misunderstands a query’s intent, it can create a response that is confidently delivered but still inaccurate — a hallucination.
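The mechanism Fernandes describes can be illustrated with a toy sketch. This is not a real LLM, and the probability table below is invented for illustration: the point is only that a system choosing the statistically most likely next word, with no access to facts, can deliver a wrong answer with full confidence.

```python
# Toy illustration of next-word prediction (not a real LLM).
# The "model" is just a hand-made probability table; the numbers
# are hypothetical and chosen to mimic a common misconception
# being overrepresented in training text.
next_word_probs = {
    ("the", "capital", "of", "australia", "is"): {
        "sydney": 0.6,    # frequent in text, but factually wrong
        "canberra": 0.4,  # the correct answer
    },
}

def predict_next(context):
    """Return the most probable next word. Note there is no
    fact-checking step anywhere -- only a probability lookup."""
    probs = next_word_probs[tuple(context)]
    return max(probs, key=probs.get)

print(predict_next(["the", "capital", "of", "australia", "is"]))
# Prints "sydney" -- a fluent, confident completion that is wrong.
```

The sketch compresses what happens at scale: the model's only objective is the likeliest continuation, so when the training data skews toward an error or the query's intent is misread, the output is a hallucination delivered with the same confidence as a correct answer.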

“While the technology is evolving rapidly, there still exists a chance a categorically wrong outcome gets presented,” Tsung-Hsien Wen, CTO at PolyAI, told PYMNTS.
