
Explainer: Generative AI and Personal Data


By Krishna Sarma

What is the law on deepfake voices?

AI-powered deepfake voices have emerged as a potent tool in political campaigns, corporate espionage and cyber fraud. Krishna Sarma examines whether existing laws in India can provide adequate remedies to victims

What makes deepfake voices dangerous?

RECENTLY, OPENAI SHOWCASED a new voice assistant, Sky, for its GPT-4o model, reminiscent of the AI character voiced by Scarlett Johansson in the 2013 film Her. Johansson promptly sent legal letters to OpenAI demanding that Sky be taken down. In response, OpenAI paused the use of Sky but denied imitating Johansson's voice, claiming to have used the authorised natural voice of another actress. Closer home in India, AI-created deepfakes of public figures have made national headlines. Actor Amitabh Bachchan's AI-generated voice was misused for political campaigning in the Madhya Pradesh assembly elections. Actor Rashmika Mandanna's face was morphed onto someone else's body in a viral video.

AI's transformational potential is widely acknowledged. But there are significant risks, ranging from the alarmist theory that AI poses an existential threat to humanity to real societal dangers: bias (both computational-statistical bias in source data and human, systemic bias), threats to data privacy, infringement of intellectual property rights, discrimination, deepfakes, disinformation, political meddling and risks to national security. Generative AI, and deepfake technology more specifically, is not inherently bad. There are beneficial and benign uses as well: it can aid identity protection where required, help recreate crime scenes, and power augmented-reality experiences in fashion retail, among other applications.
Currently, the traditional legal framework is being applied to user harms arising from generative AI, a context that was neither contemplated nor imagined when these laws were made.

What are the constitutional provisions to prevent voice cloning?

AN INDIVIDUAL HAS a fundamental right to privacy under Article 21 of the Constitution of India, as held in the Supreme Court's landmark Puttaswamy judgment (2017). Further, in the Ritesh Sinha case (2019), the Supreme Court ruled that voice samples, while protected under the right to privacy, can be legally compelled for criminal investigations in the public interest.

Can a person’s voice be protected under the Digital Personal Data Protection Act?

IT IS NOT clear whether a 'voice' (recorded or cloned) will be considered digital personal data, with the attendant explicit consent requirements for its use, under the yet-to-be-implemented Digital Personal Data Protection Act, 2023 (DPDP Act). Personal data is defined as any data about an individual who is identifiable by or in relation to such data. The information must, directly or indirectly, be capable of identifying such an individual.

Does the Copyright Act offer any protection?

A 'VOICE' PER se cannot get copyright protection. However, an artist's voice can be protected under the Copyright Act as a performer's right.
In several cases, the Delhi High Court has granted interim injunctions restraining named and unnamed parties from using well-known actors' images and other aspects of their persona, including AI-generated clones of their voices, for commercial purposes. It is important to note that the bouquet of legal bases for seeking such protection includes: personality rights, including the right to publicity; copyright in dialogue, image and manner of speaking; and common law (tort) rights, including protection against passing off, dilution and unfair competition.
The genesis of personality and publicity rights in India lies primarily in common law principles and judicial interpretations of copyright and trademark law, rather than in explicit statutory provisions.

Do we need a new law for AI-powered deepfakes?

INDIA DOES NOT have a legal or regulatory framework that specifically regulates AI or addresses deepfakes. The government has been mulling a separate regulation for AI and is apparently considering addressing some of the risks, such as deepfakes, under the proposed Digital India Act. Meanwhile, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, along with various advisories issued by the government, require intermediaries and platforms to ensure that their AI tools do not permit the hosting or spread of unlawful content. They also require labelling of synthetic content and removal of reported deepfake content within 24-36 hours of receiving a report from a user or a government authority. If platforms fail to comply, victims can seek remedies under the Information Technology Act, 2000 and the Indian Penal Code.

The author is managing partner, Corporate Law Group, and chair, CII Sub-committee on IT & ITES
