Is an AI Bubble Inevitable?
Is the AI industry headed toward “irrational exuberance”? That’s the phrase then-Federal Reserve Board chairman Alan Greenspan used in a speech given at the American Enterprise Institute during the dot-com bubble of the 1990s.
As with the hype cycles of just about every technology preceding it, there’s a significant chance, about 30%, of a major pullback in the AI market, predicts Scott Zoldi, chief analytics officer at credit scoring services firm FICO, in an email interview.
Inspired by the hype surrounding GenAI, many organizations are exploring potential AI uses, often without understanding the technology’s core limitations, “or by trying to apply Band-Aids to not-ready-for-prime-time applications of AI,” Zoldi says. He estimates that fewer than 10% of organizations can actually operationalize AI to enable meaningful execution.
Adding further pressure, Zoldi notes, is increasing AI regulation. “These AI regulations specify strong responsibility and transparency requirements for AI applications, which GenAI is unable to meet,” he says. “AI regulation will exert further pressure on companies to pull back.”
Pedro Amorim, an assistant professor of industrial engineering at Portugal’s University of Porto who specializes in AI, data, and analytics, says he’s almost certain that a pullback will, at some point, occur. “Factors such as the astronomical valuation of companies like NVIDIA, and the prevalence of decisions made by boards in every corner of the world on investments in AI without any comprehensive understanding or rationality, suggest that the AI market is clearly hyped,” he notes via email.
Jen Clark, director of advisory services at Eisner Advisory Group, is somewhat less pessimistic. “We’re more likely to witness a dampening in excitement and investment,” she says in an email interview. Clark agrees that the biggest threat to the AI market is hype. “Some of the excitement is greater than the current capabilities and ability to produce quality results.”
Hedging Bets
Forward-looking enterprise AI adopters are already hedging their bets by ensuring they have interpretable AI and traditional analytics on hand while they explore newer AI technologies with appropriate caution, Zoldi says. He notes that many financial services organizations have already pulled back from using GenAI, both internally and for customer-facing applications. “The fact that ChatGPT, for example, doesn’t give the same answer twice is a big roadblock for banks, which operate on the principle of consistency.”
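The consistency roadblock Zoldi describes is easy to see in practice. The sketch below, which assumes the OpenAI Python SDK (v1.x), an API key in the environment, and an illustrative model name and prompt, simply sends the same question twice: with default-style sampling the replies frequently differ, while a temperature of 0 makes them far more repeatable, though providers generally don’t promise bit-for-bit determinism.

```python
# Minimal sketch of the repeatability issue: same prompt, two calls.
# Assumes the OpenAI Python SDK (v1.x); model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "In one sentence, explain what a credit score measures."

def ask(temperature: float) -> str:
    """Send the same prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

# With default-style sampling (temperature=1.0), two calls often differ.
print(ask(1.0))
print(ask(1.0))

# Lowering temperature to 0 makes outputs much more consistent run to run.
print(ask(0.0))
print(ask(0.0))
```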
Clark believes that the enterprises most likely to be affected by a bubble burst will be early adopters of vertically optimized AI offerings that aren’t backed by the major players, as well as enterprises that invested in their own LLM or vertically optimized solutions. “Many of the newly available tools and solutions may not have the longevity or the staying power if there is a market drawback or collapse,” she cautions. “Larger enterprises will have the investment and organizational flexibility to adjust, but mid-size and small firms will struggle if the market shifts.”
Further Repercussions
In the event of a market pullback, AI customers may revert to less sophisticated approaches instead of reevaluating their AI strategies, Amorim warns. “This could result in a setback for businesses that have invested heavily in AI, since they may be less inclined to explore its full potential or adapt to changing market dynamics.”
Just as the dot-com bust didn’t permanently destroy the web, an AI industry collapse won’t mark the end of AI. Zoldi believes there will eventually be a return to normal. “Companies that had a mature, responsible AI practice will come back to investing in continuing that journey,” he notes. “They will establish corporate standards for building safe, trustworthy, responsible AI models that focus on the tenets of robust AI, interpretable AI, ethical AI, and auditable AI.”
Final Impact
To prevent a major pullback, Zoldi says, the AI industry must move beyond aspirational and boastful claims and have honest discussions about the technology’s risks. “Market players must focus on developing a responsible AI program, and boost responsible AI practices that have atrophied during the GenAI hype cycle.”
The bust that follows a hype cycle usually swings decision-makers from excitement to frustration, Amorim says. “Unfortunately, in this case, this means we will witness unfounded mistrust in AI and a reluctance to comprehend the true power and applicability of AI.” He believes that this attitude could hinder further AI innovation and adoption, leading to missed opportunities for growth and advancement in various market sectors.
Hype vs. Fear
AI can offer an extremely powerful, long-term strategic advantage when approached thoughtfully, but AI experts and business leaders will first need to strike a better balance between hype and fear, Clark says. “The more business and technology leaders work together to tackle the problems that currently make AI hard to implement — data collection and readiness, privacy, and AI governance — the more likely we are to see better collective outcomes,” she notes. “Efforts leveraging AI should be approached as a strategic investment and long-term toolkit, rather than a quick fix.”