OpenAI, Microsoft, Nvidia to Face US AI Antitrust Probes
American regulators are reportedly readying antitrust investigations into three major players in the AI industry.
The Justice Department and Federal Trade Commission (FTC) have come to an agreement that lets them proceed with probes into the dominant roles OpenAI, Microsoft and Nvidia command in the artificial intelligence (AI) sector, the New York Times reported Thursday (June 6).
This deal will see the Justice Department look into whether chipmaker Nvidia — now a $3 trillion company — has broken antitrust laws with its behavior, the report said, citing sources familiar with the matter. The FTC will take the lead in investigating OpenAI and its partner Microsoft. In fact, the FTC has already sent subpoenas to Microsoft to examine its recent deal with AI firm Inflection AI, the Wall Street Journal reported Thursday.
The NYT report notes that the three companies had until now avoided heavy oversight by the Biden administration, though the winds began to change as generative AI became more ubiquitous.
In January, the FTC launched an investigation into tech giants Google and Microsoft and their respective partnerships with AI startups Anthropic and OpenAI.
That inquiry, the commission said at the time, would ensure that companies developing and monetizing AI are not employing methods that block the emergence of new markets and restrict healthy competition.
During a speech in April, FTC Chair Lina Khan said technology companies have tried to “dazzle” policymakers with AI’s potential, but to no avail.
“There’s no exemption from the laws prohibiting collusion, laws prohibiting price fixing, laws prohibiting monopolization, laws prohibiting fraud,” Khan said at an antitrust conference in Washington. “The FTC is going to take action.”
But, as the NYT report points out, the U.S. still lags Europe in AI regulations, with the European Union last year forming a landmark agreement on rules to curb the risks the fast-growing technology presents.
In the U.S., AI regulation is happening at the state level. For example, lawmakers in Illinois have passed a flurry of measures aimed at addressing AI concerns, including one that would expand child exploitation laws to cover AI-generated content. Another would protect people from having their voice, image or likeness duplicated by AI for commercial purposes without their permission.
And as has been covered here, California has voted to regulate AI and businesses’ personal data use.
“In this federal legislative vacuum, states are taking the lead, with California emerging as a key player,” PYMNTS wrote earlier this week.
The state is home to 32 of Forbes’ top 50 global AI companies, including OpenAI, Anthropic, Meta and Google.
“The presence of such major AI developers makes California an attractive jurisdiction for advancing responsible AI policy,” the Brookings Institution said in a recent report.