Managing Extreme AI Risks Amidst Rapid Technological Development
Although researchers have warned of the extreme risks posed by rapidly developing artificial intelligence (AI) technologies, there is a lack of consensus about how to manage them.
In a Policy Forum, Yoshua Bengio and colleagues examine the risks of advancing AI technologies – from social and economic impacts and malicious uses to the possible loss of human control over autonomous AI systems – and recommend directions for proactive, adaptive governance to mitigate them. They call on major technology companies and public funders to invest more – at least one-third of their budgets – in assessing and mitigating risks, and on global legal institutions and governments to enforce standards that prevent AI misuse.
“To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path – if we have the wisdom to take it,” write the authors.
Technology companies worldwide are racing to develop generalist AI systems that match or exceed human abilities across many critical domains. However, alongside advanced AI capabilities come societal-scale risks that threaten to amplify social injustice, erode social stability, enable large-scale cybercriminal activity, and facilitate automated warfare, customized mass manipulation, and pervasive surveillance.
Beyond these harms, researchers have also warned of the potential loss of control over autonomous AI systems, which could render human intervention ineffective. Bengio et al. argue that humanity is not on track to handle these risks: compared with the effort devoted to making AI systems more powerful, very few resources are invested in ensuring the safe and ethical development and deployment of advanced AI technologies.
To address this, the authors outline urgent priorities for AI research and development, as well as for governance.