
Global AI Landscape in Flux as Regulations Evolve


The global artificial intelligence (AI) landscape is undergoing significant shifts as regulators grapple with the technology’s rapid advancements.

While the U.S. and Europe are considering tightening AI regulations, Argentine President Javier Milei is positioning his country as a potential haven for tech investments. Meanwhile, the U.S. legal system is treading cautiously, with federal appeals courts hesitating to adopt AI-related rules.

Various industry leaders are also urging the U.S. Food and Drug Administration (FDA) to strike a balance in its approach to AI regulation in the pharmaceutical and medical device sectors.

Regulatory Shifts May Drive AI Investment to Argentina

After six months in office, President Milei is capitalizing on global regulatory shifts to position Argentina as the world’s fourth AI hub. Milei’s economic adviser, Demian Reidel, has highlighted Argentina’s potential as a strategic destination for tech investments, given the increasing regulatory pressures in the U.S. and Europe, according to a report by the Financial Times.

Reidel, who orchestrated Milei’s recent meetings with tech giants like OpenAI, Google, Apple and Meta, said restrictive regulations in other regions are making Argentina an attractive alternative.

“Extremely restrictive” rules have “killed AI in Europe,” Reidel said. He added that discussions in the U.S., particularly in California, indicated that American lawmakers might follow a similar path, further driving companies to seek more favorable environments.

In May, Milei and Reidel held private meetings in California with key industry figures, including OpenAI’s Sam Altman and Apple’s Tim Cook. They also hosted a summit with AI investors and thinkers, such as venture capitalist Marc Andreessen and sociologist Larry Diamond. Additionally, Milei has met with Tesla CEO Elon Musk twice.

Court Case? Better Bring a Human

In a move that could have set a digital precedent, the 5th U.S. Circuit Court of Appeals in New Orleans decided to keep its courtrooms strictly human for now. The court opted not to adopt what would have been the nation’s first rule regulating the use of generative AI by lawyers, Reuters reported Tuesday (June 11).

The proposed rule, introduced last November, would have required attorneys who used generative AI tools such as OpenAI's ChatGPT in preparing filings to certify that the documents had been thoroughly reviewed for accuracy. Noncompliance could have led to sanctions or the striking of the errant documents from the court record.

The court’s decision came after an influx of public commentary, mostly from skeptical lawyers. The legal community voiced concerns over AI’s reliability, citing incidents where AI “hallucinations” resulted in fictitious case citations.

Had the 5th Circuit moved forward, it would have been the only court among the 13 federal appeals courts with such a rule. Other federal appeals courts are also toying with the idea of AI regulations, echoing the 5th Circuit’s concerns.

Across the pond, a recent survey by Thomson Reuters showed that U.K. lawyers are divided on AI regulation: 44% of in-house lawyers want government oversight, while 50% prefer self-regulation. Law firms echo this split, with 36% favoring regulation and 48% opting for a laissez-faire approach, leaving regulators in a bind.

Experts Urge FDA to Strike Balance in AI Regulation

Industry leaders at the RAPS Regulatory Intelligence Conference emphasized the need for a balanced approach in the FDA’s future AI regulations, advocating for flexibility and collaboration over rigid rules, Regulatory News reported Monday (June 10).

Moderated by Chris Whalley, Pfizer's director of regulatory intelligence, the panel featured attorney Bradley Thompson of law firm Epstein Becker & Green; Sam Kay, vice president of pharma at AI-powered health data firm Basil Systems; Gopal Abbineni, director of global regulatory strategy at pharmaceutical firm Bayer; and Elizabeth Rosenkrands Lange, head of U.S. global regulatory and scientific policy at science and tech firm EMD Serono/Merck Group. They collectively warned that overly prescriptive regulations could hinder innovation.

The panel stressed the importance of clearly defining AI goals within the pharmaceutical and medical device industries. Bayer’s use of AI was highlighted as an example of integrating AI into medical devices and regulatory intelligence. Merck’s AI tools and pilot projects were also noted, with an emphasis on the need for vendor partnerships due to current technology limitations.

Thompson noted that AI's capacity to analyze vast amounts of data points to a wealth of untapped information that could streamline regulatory processes.

Opinions on AI’s readiness varied among the panelists.

Some expressed skepticism about AI's current capabilities and advised against large investments without clear objectives, noting that poorly planned initiatives often fail within months. Others were more optimistic, highlighting AI's ability to accelerate product development while cautioning that current tools are only a first step and require further refinement.

The panel concluded with a consensus that precise goals and strategic investments are crucial for leveraging AI’s full potential in the pharmaceutical and medical device sectors while effectively navigating the regulatory landscape.


