Artificial Genocidal Intelligence: how Israel is automating human rights abuses and war crimes
Recent public discourse on artificial intelligence (AI) has been dominated by doomsday scenarios and sci-fi predictions of advanced AI systems escaping human control. As a result, when people talk about AI warfare, they tend to think of fully automated “killer robots” on the loose. What Israel’s war on Gaza has revealed, however, is that much more mundane and not particularly sophisticated AI surveillance systems are already being used to unleash dystopian, tech-driven horrors.
As recent media investigations have uncovered, Israeli AI targeting systems “Lavender” and “The Gospel” are automating mass slaughter and destruction across the Gaza Strip. This is the culmination of many rights-abusing AI trends we have previously warned against, such as biometric surveillance systems and predictive policing tools. The AI-enhanced warfare in Gaza demonstrates the urgent need for governments to ban uses of technologies that are incompatible with human rights, in times of peace as well as war.
Death from above: Gaza as an experimental tech laboratory
Israel’s use of AI in warfare is not new. For decades, Israel has used the Gaza Strip as a testing ground for new technologies and weaponry, which it subsequently sells to other states. Its 11-day military bombardment of Gaza in May 2021 was even dubbed the “first artificial intelligence war” by the Israel Defense Forces (IDF). In the current assault on Gaza, we’ve seen Israel use three broad categories of AI tools:
- Lethal autonomous weapon systems (LAWS) and semi-autonomous weapons (semi-LAWS): The Israeli army has pioneered the use of remote-controlled quadcopters equipped with machine guns and missiles to surveil, terrorize, and kill civilians sheltering in tents, schools, hospitals, and residential areas. Residents of Gaza’s Nuseirat Refugee Camp report that some drones broadcast sounds of babies and women crying, in order to lure out and target Palestinians. For years, Israel has deployed “suicide drones,” automated “Robo-Snipers,” and AI-powered turrets to create “automated kill-zones” along the Gaza border, while in 2021, it also deployed a semi-autonomous military robot named “Jaguar,” promoted as “one of the first military robots in the world that can substitute soldiers on the borders.”
- Facial recognition systems and biometric surveillance: Israel’s ground invasion of Gaza was an opportunity to expand its biometric surveillance of Palestinians, already deployed in the West Bank and East Jerusalem. The New York Times reported on how the Israeli military is using an expansive facial recognition system in Gaza “to conduct mass surveillance there, collecting and cataloging the faces of Palestinians without their knowledge or consent.” According to the report, this system uses technology from Israeli company Corsight and Google Photos to pick out faces from crowds and even from grainy drone footage.
- Automated target generation systems: most notably the Gospel, which generates infrastructure targets; Lavender, which generates individual human targets; and Where is Daddy?, a system designed to track suspected militants and target them when they are at home with their families.
LAWS, and to a certain degree semi-LAWS, have been condemned by the UN as “politically unacceptable and morally repugnant,” and there are growing calls for them to be banned. The use of AI target-generation systems in warfare, coupled with biometric mass surveillance, warrants further attention, given how they demonstrate the devastating, even genocidal, wartime impact of technologies that should already be banned in peacetime.
Automating genocide: the fatal consequences of AI in warfare
While targeting systems such as the Gospel or Lavender may initially seem like a shocking new frontier, they are in fact merely the apex of another type of AI system already used worldwide: predictive policing. Just as the Israeli army uses “data-driven systems” to predict who may be a Hamas operative or which building may be a Hamas stronghold, law enforcement agencies use AI systems to predict which children might commit a crime or be part of a gang, or where to deploy extra police forces. Such systems are inherently discriminatory and profoundly flawed, with severe consequences for the people concerned. In Gaza, those consequences can be fatal.
When we consider the impact of such systems on human rights, we need to look at the consequences, first, if they malfunction and second, if they work as intended. In both situations, reducing human beings to statistical data points has grave and irreversible consequences for people’s dignity, safety, and lives.
When it comes to targeting systems malfunctioning, a key concern is that these systems are built and trained on flawed data. According to +972 Magazine’s investigation, the training data fed into the system included information on non-combatant employees of Gaza’s Hamas government, resulting in Lavender mistakenly flagging as targets individuals with communication or behavioral patterns similar to those of known Hamas militants. These included police and civil defense workers, militants’ relatives, and even individuals who merely had the same name as Hamas operatives.
As reported by +972 Magazine, even though Lavender had a 10% error rate when identifying an individual’s affiliation with Hamas, the IDF gave sweeping approval for its kill lists to be adopted automatically, “as if it were a human decision.” Soldiers reported not being required to thoroughly or independently check the accuracy of Lavender’s outputs or its intelligence data sources; the only mandatory check before authorizing a bombing was to ensure that the marked target was male, which took about “20 seconds.”
There is also no robust way to test such systems’ accuracy, nor to validate their performance. The process of verifying a person’s affiliation with Hamas is extremely complex, especially given the potentially flawed nature of the data such predictions are based on. It has been repeatedly shown that machine learning systems cannot reliably predict complex human attributes, such as “potential future criminality,” not only because the data is inadequate and the systems rely on proxies (e.g. data about arrests, as opposed to data on crimes actually committed), but also because it is simply not the case that “more data equals better predictions.”
Beyond such systems’ lack of accuracy or human verification, a more existential concern is how their use is fundamentally at odds with human rights, and the inherent human dignity from which those rights derive. This is demonstrated by the reality that Israel’s AI targeting systems are working just as intended; as the IDF has said, “right now we’re focused on what causes maximum damage.” Soldiers were reportedly pressured to produce more bombing targets each day and have allegedly used unguided munitions, or “dumb bombs,” to target alleged junior militants marked by Lavender in their homes. This, coupled with Israel’s use of AI to calculate collateral damage, has resulted in the mass killing of Palestinians and a level of destruction not seen since World War II, according to the UN.
The use of these AI targeting systems effectively offloads human responsibility for life-and-death decisions, attempting to hide an utterly unsophisticated campaign of mass destruction and murder behind a veneer of algorithmic objectivity. There is no ethical or humane way to use systems such as Lavender or Where is Daddy? because they are premised on the fundamental dehumanization of people. They must be banned — and we need to abolish the surveillance infrastructure, biometric databases, and other “peacetime tools” that enable such systems to be deployed in war zones.
Big Tech’s role in atrocity crimes
As discussed above, surveillance infrastructure developed and deployed during peacetime is easily repurposed during war to enable the worst human rights abuses. This calls into question the role of Big Tech companies in supplying civilian technologies that can be used for military ends: most notably the cloud computing and machine learning services that Google and Amazon Web Services provide to Israel through Project Nimbus. It has also been suggested that metadata from Meta-owned WhatsApp is being used to feed the Lavender targeting system.
By failing to address their human rights responsibilities, and continuing to provide these services to Israel’s government, companies such as Google, AWS, and Meta risk being complicit in aiding or abetting the Israeli military and intelligence apparatus in its alleged atrocity crimes in Gaza.
We cannot allow the development of mass surveillance infrastructure that can be used to produce targets in bulk, determine a “reasonable” number of civilian casualties, and ultimately abdicate human responsibility for life-and-death decisions. We reiterate our call on all governments to ban uses of AI that are incompatible with human rights, including predictive policing, biometric mass surveillance, and target generation systems such as Lavender. The systems Israel is using in Gaza, together with the government’s long-standing and ever-expanding mass surveillance apparatus, offer a glimpse into an even more dystopian future that cannot and should not ever be allowed to come to fruition.