AI

Can AI ever be smarter than humans?


What’s the context?

“Artificial general intelligence” (AGI) – the benefits, the risks to security and jobs, and is it even possible?

  • Tech firms work on AI that is ‘smarter than humans’
  • Traditional AI lacks sensory perception, new ideas
  • More advanced models could add to AI’s pros and cons

LONDON –  When researcher Jan Leike quit his job at OpenAI last month, he warned the tech firm’s “safety culture and processes (had) taken a backseat” while it trained its next artificial intelligence model.

He voiced particular concern about the company’s goal to develop “artificial general intelligence”, a supercharged form of machine learning that it says would be “smarter than humans”.

Some industry experts say AGI may be achievable within 20 years; others say it will take many decades, if it arrives at all.

But what is AGI, how should it be regulated and what effect will it have on people and jobs?

What is AGI?

OpenAI defines AGI as a system “generally smarter than humans”. Scientists disagree on what exactly this means.

“Narrow” AI includes systems such as ChatGPT, which perform a specific, singular task. They work by pattern matching, akin to putting together a puzzle without understanding what the pieces represent, and lack the ability to count or to complete logic puzzles.

“The running joke, when I used to work at DeepMind (Google’s artificial intelligence research laboratory), was AGI is whatever we don’t have yet,” Andrew Strait, associate director of the Ada Lovelace Institute, told Context.

IBM has suggested that artificial intelligence would need at least seven critical skills to reach AGI, including visual and auditory perception, making decisions with incomplete information, and creating new ideas and concepts.

What are the risks of AGI?

Narrow AI is already used across many industries, but it has caused problems, such as lawyers citing “hallucinated” – made-up – legal precedents and recruiters using biased services to screen potential employees.

AGI still lacks a settled definition, so experts find it difficult to describe the risks it might pose.

It is possible that AGI will be better at filtering out bias and incorrect information, but it is also possible new problems will arise.

One “very serious risk”, Strait said, was an over-reliance on the new systems, “particularly as they start to mediate more sensitive human-to-human relationships”.

AI systems also need huge amounts of data to train on and this could result in a massive expansion of surveillance infrastructure. Then there are security risks.

“If you collect (data), it’s more likely to get leaked,” Strait said.

There are also concerns over whether AI will replace human jobs.

Carl Frey, a professor of AI and work at the Oxford Internet Institute, said an AI apocalypse was unlikely and that “humans in the loop” would still be needed.

But there may be downward pressure on wages and middle-income jobs, especially with developments in advanced robotics.

“I don’t see a lot of focus on using AI to develop new products and industries in the ways that it’s often being portrayed. All applications boil down to some form of automation,” Frey told Context.

How should AGI be regulated?

As AI develops, governments must ensure there is competition in the market, as there are significant barriers to entry for new companies, Frey said.

There also needs to be a different approach to what the economy rewards, he added. It is currently in the interest of companies to focus on automation and cut labour costs, rather than create jobs.

“One of my concerns is that the more we emphasise the downsides, the more we emphasise the risks with AI, the more likely we are to get regulation, which means that we restrict entry and that we solidify the market position of incumbents,” he said.

Last month, the U.S. Department of Homeland Security announced a board composed of the CEOs of OpenAI, Microsoft, Google, and Nvidia to advise the government on AI in critical infrastructure.

“If your goal is to minimise the risks of AI, you don’t want open source. You want a few incumbents that you can easily control, but you’re going to end up with a tech monopoly,” Frey said.

When will we get AGI?

AGI does not have a precise timeline. Jensen Huang, the chief executive of Nvidia, predicts that today’s models could advance to the point of AGI within five years.

Huang’s definition of AGI is a program that can outperform humans on logic quizzes and exams by 8%.

OpenAI has indicated that a breakthrough in AI is coming soon with Q* (pronounced Q-Star), a secretive project reported in November last year.

Microsoft researchers have said that GPT-4, one of OpenAI’s generative AI models, has “sparks of AGI”. However, it does not “(come) close to being able to do anything that a human can do”, nor does it have “inner motivation and goals” – another key aspect in some definitions of AGI.

But Microsoft President Brad Smith has rejected claims of a breakthrough.

“There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades, but I still think the time to focus on safety is now,” he said in November.

Frey suggested there would need to be significant innovation to get to AGI, due to both limitations in hardware and the amount of training data available.

“There are real question marks around whether we can develop AI on the current path. I don’t think we can just scale up existing models (with) more compute, more data, and get to AGI.”
