ChatGPT Will Never Reach Human Intelligence
Meta’s chief AI scientist thinks large language models will never reach human intelligence.
Yann LeCun asserts that artificial intelligence (AI) large language models (LLMs) such as ChatGPT have a limited grasp of logic, the Financial Times (FT) reported Wednesday (May 21).
These models, LeCun told the FT, “do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan . . . hierarchically.”
He argued against relying on LLMs in the pursuit of human-level intelligence, as these models can answer prompts correctly only if they have been fed the right training data, which makes them “intrinsically unsafe.”
LeCun is instead working on an entirely new generation of AI systems that aim to give machines human-level intelligence, though he said this could take 10 years to achieve.
The report notes that this is a potentially risky bet, as many investors are hoping for quick returns on their AI investments. Meta recently saw nearly $200 billion wiped off its market value after CEO Mark Zuckerberg pledged to increase spending and turn the tech giant into “the leading AI company in the world.”
Meanwhile, other companies are moving forward with enhanced LLMs in hopes of creating artificial general intelligence (AGI), or machines whose cognition surpasses that of humans.
For example, this week saw AI firm Scale raise $1 billion in a Series F funding round that valued the startup at close to $14 billion, with founder Alexandr Wang discussing the company’s AGI ambitions in the announcement.
Hours later, French startup H revealed it had raised $220 million, with CEO Charles Kantor telling Bloomberg News the company is working toward “full-AGI.”
However, some experts question AI’s ability to “think” like humans. Among them is Akli Adjaoute, who has spent 30 years in the AI field and recently authored the book “Inside AI.”
Rather than speculating about whether the technology can think and reason, he views AI as an effective tool, stressing the importance of understanding its roots in data and its limitations in replicating human intelligence.
“AI does not have the ability to understand the way that humans understand,” Adjaoute told PYMNTS CEO Karen Webster.
“It follows patterns. As humans, we look for patterns. For example, when I recognize the number 8, I don’t see two circles. I see one. I don’t need any extra power or cognition. That’s what AI is based on. It’s the recognition of algorithms and that’s why they’re designed for specific tasks.”
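Adjaoute’s point about narrow, pattern-based recognition can be made concrete with a minimal sketch (not from the article or his book, and purely illustrative): a standard scikit-learn classifier trained on labeled digit images learns pixel patterns for one specific task, labeling digits, without any notion of what an “8” means.

```python
# Illustrative sketch of task-specific pattern recognition, assuming scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

# The model fits weights to pixel intensities; it "recognizes" an 8 only as a
# statistical pattern learned from training data, for this single task.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print("Accuracy on held-out digits:", model.score(X_test, y_test))
```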