
Law prof predicts generative AI will die at the hands of watchdogs • The Register


Video: Generative AI is destined to drown in a tsunami of regulation, argues Santa Clara University law professor Eric Goldman.

For Amazon, Google, Meta, Microsoft, and other tech titans that have bet heavily on machine-generated content, this is a dire forecast, though perhaps not so bad as it is for smaller companies eyeing chatbots and automated content creation.

It's a flood of regulations and red tape that the makers of generative AI models are trying to redirect through initiatives like the recently formed industry consortium focused on AI safety. Participating firms aim to discourage or prevent generative AI from being used to create child sexual abuse images – because failure to do so guarantees legislative intervention and costs.

Goldman outlined the coming regulatory wave last week in a presentation at Marquette University School of Law in Milwaukee, Wisconsin, and in an accompanying paper titled, “Generative AI is Doomed.”

Generative AI refers to machine learning models trained on text, audio, and images that produce text, audio, or images in response to a descriptive prompt – GPT-4, Gemini 1.5, Claude 3, Midjourney, DALL-E, LLaMA 3, and so on. These models are trained on massive amounts of other people’s content, often without consent or authorization.

There have been numerous copyright lawsuits over alleged infringement by the makers of generative AI models, many of which are still pending and could limit the viability of generative AI.

“I didn’t fully address the indexing copyright litigation in my talk, but copyright law remains a major potential barrier to the success of Generative AI,” Goldman told The Register.

“If copyright owners have a viable claim against Generative AI indexing, that would create an unmanageable rights thicket with millions of rights owners. Licensing schemes and statutorily-created rights clearinghouses could partially mitigate the issue, but only by dramatically increasing industry costs (exacerbating the Sport of Kings problem). Further, to avoid those costs, Generative AI model-makers might attempt countermoves that reduce their models’ functionality.”

We should note the Sport of Kings is a term that has been applied not just to polo but also to another notoriously costly pastime, patent litigation, which is an equally apt reference in this context.

Speaking of LLM disasters

Microsoft last week introduced “WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent.”

The open source model family was touted for its performance, but evidently was released without adequate safety testing. So the Windows giant withdrew the model, or tried anyway. The model had already been downloaded many times and thus can still be found in the wild. Enjoy or not, while supplies last.

But infringement risk is not the primary focus of Goldman’s concerns. He worries that the current animus against Big Tech and the accompanying regulatory environment has become too hostile for generative AI to thrive. In his paper, he hearkens back to the 1990s, when the internet reached a mainstream audience and the word “tsunami” was used in a more benign sense to evoke the social impact of emerging digital technology.

“It might be impossible to imagine today, but 1990s regulators often took a deferential and generally hands-off approach to the new technology,” wrote Goldman. “This stance was fueled by prevailing concerns that overly aggressive regulatory responses could distort or harm the emergence of this important innovation.”

What Goldman would like to see are laws like Section 230 of the Communications Decency Act, the Digital Millennium Copyright Act, and the Internet Tax Freedom Act that have allowed the internet and businesses to grow and prosper, while providing a flexible, balanced structure.

That’s not how today’s lawmakers have addressed concerns about generative AI, as he sees it.

“The regulation will come as a tsunami,” he wrote, citing figures from the Business Software Alliance to the effect that state legislatures saw more than 400 AI-related bills introduced during the first 38 days of 2024, a sixfold increase over the same period a year earlier.

“Not all of those bills will pass, but some already have and more are coming,” he observed. “Regulators are ‘flooding the zone’ of AI regulation now, and each new bill threatens generative AI’s innovation arc.”

There are various possible reasons why the optimism of the 1990s has faded, Goldman posits. First, the public had little preexisting awareness of the internet when it emerged: it had rarely been depicted in science fiction, and seldom as something dystopian. That’s not the case for AI, which for decades has been featured in books and films as a malevolent force.

Then there’s the general tenor of the times. In the 1990s, techno-utopianism and cheerleading accompanied the rise of the internet and the spread of communications technology. Today, there’s a lot more skepticism – what Goldman calls the “techlash.”

Given images of drones dropping grenades on the battlefield, robocar collisions, warehouse robots stealing jobs, mobile device-based tracking, algorithmic labor oversight, and the extreme wealth of tech billionaires who insist on dominating public discussion, that’s perhaps not surprising.


Third, Goldman cites the political polarization in the world today, and warns that partisan use of generative AI represents an existential threat to the technology.

Fourth, he points to the difference between incumbents then and now. In the 1990s, he opines, the telcos were the dominant players and the mood was strongly anti-regulation. Today, Big Tech is pouring money into generative AI, creating financial barriers to entry, and is trying to shape the regulatory landscape for a competitive advantage.

“OpenAI has openly called for the increased regulation of generative AI,” wrote Goldman. “This move doesn’t prove that such regulations are wise or in the public interest. More likely, it is an incumbent’s effort to hinder its competitors. Many regulators will happily support these requests, even when they are being played.”

He adds that these large tech firms look likely to embrace licensing fees as a way to mitigate legal risks, which in turn will raise costs and thus limit competition.

Goldman predicts regulators will make their presence known in all aspects of generative AI, with few limitations imposed by existing US laws like Section 230 or the First Amendment.

“The regulatory frenzy will have a shocking impact that most of us have rarely seen, especially when it comes to content production: a flood of regulation that will dramatically reshape the generative AI industry – if the industry survives at all,” he concludes. ®
