
How to define artificial general intelligence


The idea of machines outsmarting humans has long been the subject of science fiction. Rapid improvements in artificial-intelligence (AI) programs over the past decade have led some experts to conclude that science fiction could soon become fact. On March 19th Jensen Huang, the chief executive of Nvidia, the world’s leading designer of AI chips and its third most valuable publicly traded company, said he believed today’s models could advance to the point of so-called artificial general intelligence (AGI) within five years. What exactly is AGI, and how can we judge when it has arrived?

Mr Huang’s words should be taken with a pinch of salt: Nvidia’s profits have soared because of the growing demand for its high-tech chips, which are used to train AI models. Promoting AI is thus good for business. But Mr Huang did set out a clear definition of what he believes would constitute AGI: a program that can do 8% better than most people at certain tests, such as bar exams for lawyers or logic quizzes.

This proposal is the latest in a long line of definitions. In 1950 Alan Turing, a British mathematician, proposed that a machine could be considered intelligent if talking to it were indistinguishable from talking to a human. Arguably the most advanced large language models already pass the Turing test. But in recent years tech leaders have moved the goalposts by suggesting a host of new definitions. Mustafa Suleyman, co-founder of DeepMind, an AI-research firm, and chief executive of a newly established AI division within Microsoft, believes that what he calls “artificial capable intelligence”, a “modern Turing test”, will have been reached when a model is given $100,000 and turns it into $1m without instruction. (Mr Suleyman is a board member of The Economist’s parent company.) Steve Wozniak, a co-founder of Apple, has a more prosaic vision of AGI: a machine that can enter an average home and make a cup of coffee.

Some researchers reject the concept of AGI altogether. Mike Cook, of King’s College London, says the term has no scientific basis and means different things to different people. Few definitions of AGI attract consensus, admits Harry Law, of the University of Cambridge, but most are based on the idea of a model that can outperform humans at most tasks—whether making coffee or making millions. In January researchers at DeepMind proposed six levels of AGI, ranked by the proportion of skilled adults that a model can outperform: they say the technology has reached only the lowest level, with AI tools equal to or slightly better than an unskilled human.

The question of what happens when we reach AGI obsesses some researchers. Eliezer Yudkowsky, an AI-safety researcher who has been fretting about the technology for 20 years, worries that by the time people recognise that models have become sentient, it will be too late to stop them and humans will become enslaved. But few researchers share his views. Most believe that AI is simply following human inputs, often poorly.

There may be no consensus about what constitutes AGI among academics or businessmen—but a definition could soon be agreed on in court. As part of a lawsuit lodged in February against OpenAI, a company he co-founded, Elon Musk is asking a court in California to decide whether the firm’s GPT-4 model shows signs of AGI. If it does, Mr Musk claims, OpenAI has gone against its founding principle that it will license only pre-AGI technology. The company denies that it has done so. Through his lawyers, Mr Musk is seeking a jury trial. Should his wish be granted, a handful of non-experts could decide a question that has vexed AI experts for decades.

Editor’s note: This piece has been updated to clarify Mustafa Suleyman’s concept.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com


