What is Artificial General Intelligence (AGI), and why are people worried about it?
In a recent interview, Sam Altman, CEO of OpenAI, expressed his commitment to investing billions of dollars in the development of Artificial General Intelligence (AGI). But even as Altman champions what is considered the pinnacle of AI development, many in the global tech community remain deeply apprehensive. Here is why.
AGI refers to a machine or software that can perform any intellectual task a human can. This includes reasoning, common sense, abstract thinking, background knowledge, transfer learning, and the ability to differentiate between cause and effect.
In simple words, AGI aims to emulate human cognitive abilities so that it can take on unfamiliar tasks, learn from new experiences, and apply its knowledge in new ways.
Humans learn through their experiences: at school, at home, or elsewhere; by talking to people or observing things; by reading books and articles, watching television, and so on. The human brain then uses the information it has gathered to make decisions, often subconsciously, that solve a given problem, or to pose a new one.
With AGI, researchers aim to build software or a computer that can do all of this: everything that a human brain does. Think of having a super-intelligent robot friend who can understand everything you say, learn new things just the way you do, and even spot problems and find solutions on its own.
How is AGI different from AI we already use?
The main difference between AGI and the more common form of AI, also known as narrow AI, lies in their scope and capabilities.
Narrow AI is designed to perform specific tasks, such as image recognition, translation, or playing games like chess (at which it can outdo humans), but it remains limited to its set parameters. AGI, on the other hand, envisions a broader, more generalised form of intelligence that, like human intelligence, is not confined to any particular task.
Over the years, AI models have become progressively better and more sophisticated, as billions of dollars have been pumped in to fuel research. The creation of AGI is seen as the final frontier of this development.
Is this a new idea?
No. The idea of AGI first emerged in the mid-20th century, in a paper written by Alan Turing, widely considered the father of theoretical computer science and artificial intelligence.
In ‘Computing Machinery and Intelligence’ (1950), he introduced what is now known as the Turing test, a benchmark for machine intelligence. Simply put, if a machine can engage in a conversation with a human without being detected as a machine, it has, according to the Turing test, demonstrated human intelligence.
When Turing wrote this influential paper, humans were nowhere close to developing artificial intelligence; even computers were in their infancy. Yet his work prompted wide-ranging discussion about the possibility of such machines, as well as their potential benefits and risks.
How can AGI help humanity?
In theory, AGI has innumerable positive implications. In healthcare, for instance, it could redefine diagnostics, treatment planning, and personalised medicine by integrating and analysing vast datasets, far beyond the capabilities of humans.
In finance and business, AGI could automate various processes and enhance decision-making by offering real-time analytics and accurate market predictions.
In education, AGI could power adaptive learning systems tailored to the unique needs of individual students, potentially democratising access to personalised education worldwide.
In an interview with The Wall Street Journal, OpenAI’s Sam Altman said that AGI will lead to a “lot of productivity and economic value” and will be “transformative”, promising unprecedented problem-solving capabilities and creative expression.
What, then, drives the scepticism regarding AGI?
Despite the promise AGI holds, it continues to fuel widespread apprehension, for a number of reasons. For instance, the humongous amount of computational power required to develop AGI systems raises concerns about their environmental impact, both because of energy consumption and because of the e-waste generated.
AGI could also lead to significant loss of employment and widespread socio-economic disparity, with power concentrated in the hands of those who control the AGI. It could introduce new security vulnerabilities of a kind we have not even thought of yet, and its development could outrun the ability of governments and international bodies to frame suitable regulations. And if humans were to become dependent on AGI, it might even lead to the loss of basic human skills and capabilities.
But the most serious fear regarding AGI is that its abilities could outpace those of human beings, making its actions difficult to understand and predict. This might even lead to a situation where it becomes ‘too’ independent, so much so that humans simply lose control. And, as in many sci-fi movies, this might reach a point where AGI takes actions against human well-being.
In a 2014 interview with the BBC, the late professor Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race.”
Similarly, AI pioneers Yoshua Bengio, Geoffrey Hinton, and Yann LeCun are collectively known as the ‘Godfathers of AI’. Of the three, Bengio and Hinton have repeatedly warned about the potentially catastrophic outcomes of creating AGI, with Hinton comparing AGI’s dangers to those posed by nuclear weapons.
Today, most thinkers in the field advocate for stringent regulations to ensure that the development of AGI is in line with human values and safety standards.