Leading AI Firms Pledge ‘Responsible’ Tech Development


Some of the world’s biggest tech companies pledged to work together to guard against the dangers of artificial intelligence as they wrapped up a two-day AI summit, also attended by multiple governments, in Seoul.

Sector leaders from South Korea’s Samsung Electronics to Google promised at the event, co-hosted with Britain, to “minimise risks” and develop new AI models responsibly, even as they push to move the cutting-edge field forward.

The fresh commitment, codified in the so-called Seoul AI Business Pledge on Wednesday alongside a new round of safety commitments announced the previous day, builds on the consensus reached at the inaugural global AI safety summit at Bletchley Park in Britain last year.

Tuesday’s commitment saw companies including OpenAI and Google DeepMind promise to share how they assess the risks of their technology — including those “deemed intolerable” and how they will ensure such thresholds are not crossed.

But experts warned it was hard for regulators to understand and manage AI when the sector was developing so rapidly.

“I think that’s a really, really big problem,” said Markus Anderljung, head of policy at the Centre for the Governance of AI, a non-profit research body based in Oxford, Britain.

“Dealing with AI, I expect to be one of the biggest challenges that governments all across the world will have over the next couple of decades.”

“The world will need to have some kind of joint understanding of what are the risks from these sort of most advanced general models,” he said.

Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, said in Seoul on Wednesday that “as the pace of AI development accelerates, we must match that speed… if we are to grip the risks.”

She said there would be more opportunities at the next AI summit in France to “push the boundaries” in terms of testing and evaluating new technology.

“Simultaneously, we must turn our attention to risk mitigation outside these models, ensuring that society as a whole becomes resilient to the risks posed by AI,” Donelan said.

The stratospheric success of ChatGPT soon after its 2022 release sparked a gold rush in generative AI, with tech firms around the world pouring billions of dollars into developing their own models.

Such AI models can generate text, photos, audio and even video from simple prompts, and their proponents have heralded them as breakthroughs that will improve lives and businesses around the world.

However, critics, rights activists and governments have warned that they can be misused in a wide variety of ways, including the manipulation of voters through fake news stories or “deepfake” pictures and videos of politicians.

Many have called for international standards to govern the development and use of AI.

“I think there’s increased realisation that we need global cooperation to really think about the issues and harms of artificial intelligence. AI doesn’t know borders,” said Rumman Chowdhury, an AI ethics expert who leads Humane Intelligence, an independent non-profit that evaluates and assesses AI models.

Chowdhury told AFP that it is not just the “runaway AI” of science fiction nightmares that is a huge concern, but issues such as rampant inequality in the sector.

“All AI is just built, developed and the profits reaped (by) very, very few people and organisations,” she told AFP on the sidelines of the Seoul summit.

People in developing nations such as India “are often the staff that does the clean-up. They’re the data annotators, they’re the content moderators. They’re scrubbing the ground so that everybody else can walk on pristine territory”.


