World is ill-prepared for breakthroughs in AI, say experts


The world is ill-prepared for breakthroughs in artificial intelligence, according to a group of senior experts including two “godfathers” of AI, who warn that governments have made insufficient progress in regulating the technology.

A shift by tech companies to autonomous systems could “massively amplify” AI’s impact and governments need safety regimes that trigger regulatory action if products reach certain levels of ability, said the group.

The recommendations are made by 25 experts including Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI” who have won the ACM Turing award – the computer science equivalent of the Nobel prize – for their work.

The intervention comes as politicians, experts and tech executives prepare to meet at a two-day summit in Seoul on Tuesday.

The academic paper, called "Managing extreme AI risks amid rapid progress", recommends government safety frameworks that introduce tougher requirements if the technology advances rapidly.

It also calls for increased funding for newly established bodies such as the UK and US AI safety institutes, for tech firms to be required to carry out more rigorous risk-checking, and for restrictions on the use of autonomous AI systems in key societal roles.

“Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts,” according to the paper, published in the Science journal on Monday. “AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.”

A global AI safety summit at Bletchley Park in the UK last year brokered a voluntary testing agreement with tech firms including Google, Microsoft and Mark Zuckerberg’s Meta, while the EU has brought in an AI act and, in the US, a White House executive order has set new AI safety requirements.

The paper says advanced AI systems – technology that carries out tasks typically associated with intelligent beings – could help cure disease and raise living standards but also carry the threat of eroding social stability and enabling automated warfare. It warns, however, that the tech industry’s move towards developing autonomous systems poses an even greater threat.

“Companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems,” the experts said, adding that unchecked AI advancement could lead to the “marginalisation or extinction of humanity”.


The next stage in development for commercial AI is “agentic” AI, the term for systems that can act autonomously and, in theory, complete tasks such as booking holidays.

Last week, two tech firms gave a glimpse of that future with OpenAI’s GPT-4o, which can carry out real-time voice conversations, and Google’s Project Astra, which was able to use a smartphone camera to identify locations, read and explain computer code and create alliterative sentences.

Other co-authors of the proposals include the bestselling author of Sapiens, Yuval Noah Harari, the late Daniel Kahneman, a Nobel laureate in economics, Sheila McIlraith, a professor in AI at the University of Toronto, and Dawn Song, a professor at the University of California, Berkeley. The paper published on Monday is a peer-reviewed update of initial proposals produced before the Bletchley meeting.
