UK has real concerns about AI risks, says competition regulator
Just six major technology companies are at the heart of the AI sector through an “interconnected web” of more than 90 investment and partnership links, the UK’s competition regulator has warned, raising concerns about the anti-competitive nature of the technology.
Sarah Cardell, chief executive of the Competition and Markets Authority, said AI foundation models – general-purpose AI systems such as OpenAI’s GPT-4 and Google’s Gemini, on which consumer and business products are frequently built – were a potential “paradigm shift” for society.
Speaking in Washington, she added that the immense concentration of power they represented would give a small number of companies “the ability and incentives to shape these markets in their own interests”.
“When we started this work, we were curious. Now, with a deeper understanding and having watched developments very closely, we have real concerns,” Cardell said.
“The essential challenge we face is how to harness this immensely exciting technology for the benefit of all, while safeguarding against potential exploitation of market power and unintended consequences. We’re committed to applying the principles we have developed, and to using all legal powers at our disposal, now and in the future, to ensure that this transformational and structurally critical technology delivers on its promise.”
The six identified by the CMA are Google, Microsoft, Meta, Amazon, and Apple, as well as Nvidia, the leading supplier of chips for training and using AI. Their involvement in more than 90 partnerships and investments, the CMA said, could limit diversity and choice in the market.
The “winner takes all dynamics” of digital markets led to the dominance of a few powerful platforms, Cardell said, adding that she was “determined to apply the lessons of history” to prevent the same thing from happening again.
Highlighting three “interlinked” risks, Cardell said that competition in the AI sector could be harmed by companies which control critical inputs, from data to chips, restricting access to shield themselves from competition; companies using their market power to distort choice in AI services; and partnerships between key players exacerbating concentrations of market power.
The CMA first announced its plans to investigate the market in AI foundation models in May 2023. The initial review, published in September, found that a positive outcome for the technology could not be taken for granted.
“We can’t take a positive future for granted,” Cardell said at the time. “There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”
The news came as AI regulators around the world prepare for a mini-summit in South Korea, building on the AI safety summit held in the UK in November 2023 and paving the way for the full second summit in Paris, expected later this year.