How artificial intelligence is influencing elections in India
It has been less than six months since Divyendra Singh Jadoun, the 31-year-old founder of an artificial intelligence (AI) powered synthetic media company, started making content for political parties in India. In that short time he has come to be known as the “Indian Deepfaker”, with political parties across the ideological spectrum reaching out to him for digital campaigning.
Jadoun’s meteoric rise has a lot to do with the fact that close to a billion people are voting in India’s elections, the longest and largest in the world, which started last month. He says he doesn’t know of a single political party that hasn’t sought him out to enhance its outreach. “They [political parties] don’t reach out to us directly, though. Their PR agencies and political consultants ask us to make content for them,” said Jadoun, who runs Polymath, a nine-employee AI firm based in a small town known for its temples in the north Indian state of Rajasthan.
In India’s fiercely divided election landscape, AI has emerged as a newfound fascination, particularly as the right-wing ruling Bharatiya Janata Party (BJP) vies for a rare third consecutive term. In a nation already plagued by misinformation, the technology’s capabilities have raised concerns among experts.
Jadoun says his team has been asked many times to produce content it finds highly unethical. He has been asked to fabricate audio recordings in which rival candidates appear to make embarrassing mistakes during their speeches, and to overlay opponents’ faces onto explicit images.
“A lot of the content political parties or their agents ask us to make is on these lines, so we have to say no to a lot of work,” Jadoun told Index on Censorship.
Some campaign teams have even asked Jadoun for deliberately low-quality fake videos featuring their own candidate, which they intend to deploy to discredit any potentially damaging authentic footage that surfaces during the election period.
“We refuse all such requests. But I am not sure if every agency will have such filters, so we do see a lot of misuse of technology in these elections,” he says.
“What we offer is simply replacing the traditional methods of campaigning by using AI. For example, if a leader wants to shoot a video to reach out to each and every one of his party members, it will take a lot of time. So we use some parts of deep-fakes to create personalised messages for their party members or cadres,” Jadoun adds.
Pervasive use
India’s elections are deeply polarised, and the ruling right-wing BJP has employed a vicious anti-minority campaign to win over Hindu voters, who make up roughly 80% of the electorate. The surge in the use of AI reflects both its potential and the risks it poses amid widespread misinformation. A survey conducted last year by cybersecurity firm McAfee found that more than 75% of Indian internet users had encountered some form of deepfake content online.
Some of the most disturbing content features dead politicians resurrected through AI to sway voters. Earlier this year, the official account of the regional All India Anna Dravida Munnetra Kazhagam (AIADMK) party shared an audio clip featuring a virtual rendition of Jayalalithaa, a revered Tamil political figure who died in 2016. In the speech, her AI avatar aimed to inspire young party members, advocating for the party’s return to power and endorsing its current candidates for the 2024 general elections.
Jayalalithaa’s AI resurrection is not an isolated case.
In another instance, just four days prior to the start of India’s general election, a doctored video appeared on Instagram featuring the late Indian politician H Vasanthakumar. In the video, Vasanthakumar voices support for his son Vijay Vasanth, a sitting Member of Parliament who is contesting the election in his father’s erstwhile constituency.
The ruling BJP, known for its use of technology to polarise voters, has also shared a montage showcasing Prime Minister Modi’s accomplishments on its verified Instagram profile. The montage featured an AI-generated rendering of the voice of the late Indian singer Mahendra Kapoor.
Troll accounts subscribing to the ideology of different political parties are also employing AI and deepfakes to create narratives and counter-narratives. Bollywood star Ranveer Singh cautioned his followers in a tweet last month to be vigilant against deepfakes after a manipulated video circulated on social media platforms in which Singh appeared to criticise Modi. Using an AI-generated voice clone, the altered video falsely portrayed Singh lambasting Modi over unemployment and inflation, and urging citizens to support the main opposition party, the Indian National Congress (INC). In the original video, he had in fact praised Modi.
“AI has permeated mainstream politics in India,” said Sanyukta Dharmadhikari, deputy editor of Logically Facts, who leads a seven-member team fact-checking misinformation in different vernacular languages.
Dharmadhikari says that countering disinformation or misinformation becomes extremely difficult in an election scenario as false information consistently spreads more rapidly than fact-checks, particularly when it aligns with a voter’s confirmation bias. “If you believe a certain politician is capable of a certain action, a deepfake portraying them in such a scenario can significantly hinder fact-checking efforts to dispel that misinformation,” she told Index on Censorship.
Selective curbs
Amid growing concerns, and just a month before the elections, the Indian government rushed to regulate AI by asking tech companies to obtain approval before releasing new tools. This was a substantial shift from its earlier position, when it told the Indian Parliament that it would not interfere in how AI is used in the country. Critics argue that the move may be another attempt to selectively clamp down on the opposition and limit freedom of expression. The Modi government has been widely accused of abusing central agencies to target the opposition while overlooking allegations involving its own leaders or those of its coalition partners.
“There needs to be a political will to effectively regulate AI, which seems amiss,” says Dharmadhikari. “Even though the Information Ministry at first seemed concerned at the misuse of deepfakes, gradually we have seen they have expressed no concerns about their dissemination, especially if something is helping [PM] Modi,” she adds.
Chaitanya Rohilla, a Delhi-based lawyer who initiated a Public Interest Litigation (PIL) at the Delhi High Court concerning the unregulated use of AI and deepfakes in the country, believes that as the technology evolves at breakneck speed, the need for robust legal frameworks to safeguard against AI’s emerging threats is more pressing than ever.
“The government is saying that we are working on it…We are working on rules to bring about or to specifically target these deepfakes. But the problem is the pace at which the government is working, it is actually not in consonance with how the technology is changing,” Rohilla told Index on Censorship.
Rohilla’s PIL asked the judiciary to restrict access to websites that produce deepfakes, and proposed that such websites be mandated to label AI-generated content and prohibited from generating illicit material.
But Indian courts have refused to intervene.
“The Information Technology Act that we have in our country is not suitable; it’s not competent to handle how dynamically the AI environment is changing. So as long as the system is unchecked and unregulated, it [deepfake dissemination] would just keep on happening and happening,” he said.