A UN Report on AI and human rights highlights dangers of the AI revolution—and our own power to prevent substantial harms
Hello and welcome to Eye on AI.
As the conversation around generative AI safety continues, a recent report from the UN applies a human rights lens to the risks. Published as a supplement to the UN B-Tech Project’s recent paper on generative AI, the “Taxonomy of Human Rights Risks Connected to Generative AI” explores 10 human rights that generative AI may adversely impact.
The paper says that “the most significant harms to people related to generative AI are in fact impacts on internationally agreed human rights” and lays out several examples for each of the 10 human rights it explores: Freedom from Physical and Psychological Harm; Right to Equality Before the Law and to Protection against Discrimination; Right to Privacy; Right to Own Property; Freedom of Thought, Religion, Conscience, and Opinion; Freedom of Expression and Access to Information; Right to Take Part in Public Affairs; Right to Work and to Gain a Living; Rights of the Child; and Rights to Culture, Art, and Science.
In many cases, the report adds nuance to issues people are already talking about, such as generative AI’s impact on creative professions and how it can be used to create harmful content, from political disinformation to nonconsensual pornography and CSAM (child sexual abuse material). Taken together, the more than 50 examples of potential human rights violations create a striking picture of what’s at stake as companies rush to develop, deploy, and commercialize AI.
The report also asserts that generative AI both alters the scope of existing human rights risks associated with digital technologies (including earlier forms of AI) and has unique characteristics that give rise to new types of human rights risks. Examples include the use of generative AI in armed conflict and the potential for multiple generative AI models to be fused into larger single-layer systems that could autonomously disseminate huge quantities of disinformation.
“Other potential risks are still emerging and in the future may represent some of the most serious threats to human rights linked to generative AI,” it reads.
One risk that stuck out to me concerns the Rights of the Child: “Generative AI models may affect or limit children’s cognitive or behavioral development where there is over-reliance on these models’ outputs, for example when children use these tools as a substitute for learning in educational settings. These use cases may also cause children to unknowingly adopt incorrect or biased understandings of historical events, societal trends, etc.”
The report also notes that children are especially susceptible to human rights harms linked to generative AI because they are less capable of distinguishing synthetic content from genuine content, identifying inaccurate information, and understanding that they’re interacting with a machine. It makes me think of how young children were given daily access to social media with virtually no transparency or research into how it might impact their development or mental well-being. As a result of social media companies’ recklessness and an almost total lack of guardrails around the technology, children were harmed. The issue came to a head earlier this year when the CEOs of Meta, Snapchat, TikTok, X, and Discord testified before Congress in a heated hearing about social media’s role in child exploitation, as well as its contribution to addiction, suicide, eating disorders, unrealistic beauty standards, bullying, and sexual abuse. Kids were treated as guinea pigs on Big Tech’s social media platforms, as critics and parents often say, and it’d be shameful to repeat the mistake with generative AI.
The section on the Right to Work and to Gain a Living was also interesting and increasingly relevant, exploring how generative AI could drastically alter economies, labor markets, and daily work practices, and the disparate effects this could have on different groups. The examples range from employers using generative AI to monitor workers to the exploitative data-labeling work required to build large language models, and they extend to workers’ rights: workers engaged in labor disputes with their employers, for instance, may be at heightened risk of being replaced with generative AI tools.
One thing that’s clear from the report, however, is the extent to which these potential human rights violations are not inevitable, but depend on our own implementation of the technology and what guardrails—or lack thereof—we put around it. Generative AI as a technology won’t on its own commit these more than 50 human rights violations, but rather powerful humans acting recklessly to prioritize profit and dominance will.
Now, here’s some more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com