Transparency Coalition report calls for updating privacy laws to counter harms driven by Generative AI systems
“It’s our view that today’s most powerful AI systems process and expose personal data in ways that do not comply with U.S. privacy laws,” said TCAI Co-Founder Jai Jaisimha. “Our current regulatory systems are not fully equipped to deal with the harms this causes. We need to rethink how we define privacy harm in the AI age.”
For the past 60 years, the American legal understanding of privacy has been tied to the work of legal scholar William Prosser, who described four types of personal harms resulting from violations of a person’s right to privacy. Those are known as intrusion, public disclosure, false light, and appropriation harms. The paper suggests updating and expanding those types to include five more, first proposed in 2022 by Danielle Keats Citron and Daniel J. Solove, that have arisen in the internet and AI era. Those are physical, economic, reputational, psychological, and autonomy harms.
Using that updated understanding of privacy harms, TCAI urges regulators and enforcement agencies to step up enforcement of existing privacy laws as they apply to artificial intelligence development and deployment.
In addition, the paper argues policymakers must create an appropriate framework for direct state and federal oversight of the AI industry. This should include standardized and comprehensive documentation of the training data that enables regulatory review of new and existing AI models.
“Privacy Harms in the AI Age” urges industry and government to adopt a standard data card, which would then be a required component of any AI model or system. Such a data card—also known as a data declaration—would contain information such as the source and owner of the datasets on which the AI model was trained; how that data was collected; and whether the datasets contained personal information.
“Privacy Harms in the AI Age: Time for a System Upgrade,” written by privacy attorney and TCAI legal advisor Leigh Wickell, can be found on the Transparency Coalition website.
About the Transparency Coalition:
The Transparency Coalition is a 501(c)(3) organization working to create AI safeguards for the greater good. We believe artificial intelligence has the potential to be a powerful tool for human progress if properly trained, harnessed, and deployed. In support of that belief, we advocate for systems and policies that hold AI developers and deployers accountable for the legal and ethical operation of AI systems. Learn more at transparencycoalition.ai.
Media Contact
Laura Morarity
For the Transparency Coalition
[email protected]
SOURCE Transparency Coalition