AI-Enabled Genocide: Israel’s Invasion of Gaza

For the last eight months, discourse surrounding Israel’s invasion of the Gaza Strip has orbited the topic of Artificial Intelligence (AI). The Israeli Occupation has deployed a number of AI-enabled technologies during the invasion, in particular what has been called ‘Targeting AI’, to aid in its professed goal of ‘destroying Hamas’. Information brought to light by a new report from +972 reveals that the parameters set by the Israeli authorities, and the application of ‘Targeting AI’ under them, are in clear violation of international law. Enabled by a political culture and policies that illustrate disdain for Palestinian civilians, these ‘Targeting AI’ systems operate at the nexus of predictive policing, biometric surveillance and systems of Apartheid, and have resulted in the complete destruction of civilian life and the deaths of tens of thousands of innocent men, women and children in the Gaza Strip. 

 

The +972 report, based on leaks and interviews with anonymous soldiers active during Israel’s latest genocidal campaign (as established in Anatomy of a Genocide, the report of Francesca Albanese, Special Rapporteur for the Occupied Palestinian Territories), reveals that tens of thousands of Palestinians in Gaza are being marked for assassination through the ‘Lavender’ and ‘Where’s Daddy?’ programmes. ‘Lavender’ is yet another software system developed to fast-track the selection and pinpointing of so-called targets for assassination. impACT has reported on a number of these programmes since the inception of Israel’s assault on Gaza, including Fire Factory, which calculated the military logistics for striking targets. 

 

What is highly concerning about the application of the ‘Lavender’ programme, aside from the killing of tens of thousands of men, women and children, is that the programme apparently operates with minimal, even negligible, meaningful human control (MHC) and oversight, placing the selection of assassination targets fully within AI programmes (though the system does not carry out the killings automatically). MHC has been a central focus of international deliberations on the use of AI-enabled weaponry, or Lethal Autonomous Weapons Systems (LAWS), for some time. The notion of MHC and its proper application to LAWS is a highly debated topic, one that receives particularly critical attention from the very nations building such weaponry. The desire for MHC over contemporary and future LAWS is a critical issue for human beings globally: removing human decision-making from warfare will have drastic consequences. There are, essentially, two premises, most adequately laid out by Article 36, an NGO focused on reducing harm from weaponry: 

 

  1. A machine applying force without any human control whatsoever is broadly considered unacceptable.
  2. A human simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness, is not sufficient to be considered ‘human control’ in a substantive sense.

 

The lack of MHC is reflective of Israel’s opposition to the establishment of international legal structures that would place proper human oversight over AI-enabled weaponry, as has been made clear by the attitudes of Israeli delegates at numerous international conventions, including the recent discussions of the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE). 

 

Further, the unearthing of ‘Where’s Daddy?’ adds to the huge volume of footage and testimony showing indiscriminate bombings and shootings: the Israeli authorities have given wide approval to bomb civilian targets that, according to ‘Targeting AI’, may house operatives and their families. They have also established highly illegal parameters for the number of murdered civilians acceptable to the Israeli authorities per suspected operative killed. The use of both ‘Lavender’ and ‘Where’s Daddy?’ is in clear violation not only of International Humanitarian Law and the laws of conventional warfare, but also of the ‘Guiding Principles’ on acceptable use of AI technologies established by the CCW in 2019. 

 

impACT International expresses serious concern that, due to the highly publicised use of AI-enabled weaponry, culpability for the mountain of violations of international law and war crimes committed will be attributed to issues with software, rather than to those responsible for the application of policy and the “sweeping approval” of its use. The international community must adequately assess Israel’s use of ‘Targeting AI’ programmes, and ensure that there is no “AI-Washing” when it comes to culpability for these crimes. 

 

Once again, these revelations illustrate that Palestinians form a vital component, as test subjects, for the “world’s tenth largest” export market for military goods and the 120 so-called ‘defence’ companies present in Israel. All parties, including global companies continuing to provide weapons and equipment to the Israeli military, must cease aiding these genocidal actions. The international community must hold them accountable. 

 

Israel: Attitudes to Lethal Autonomous Weapons Systems and AI Use in Battle Scenarios

 

Israel, much like other nations invested in the creation of AI weaponry or LAWS, has been incredibly vocal in its opposition to many of the regulations proposed by UN officials, experts, civil society and nation states. In a statement that encapsulates a wide consensus on the dangers of these technologies, UN Secretary-General Antonio Guterres stated in 2019 that the development and use of LAWS would be “politically unacceptable and morally repugnant”. This is not a view shared by the Israeli authorities, whose position is most aptly illustrated by the Israeli delegation’s attitude at the March 2024 discussions of the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on the use of AI in battle scenarios. 

 

In a report written by WILPF, Israel’s delegation was noted as one of a number of “bad actors” who, whilst participating in the process, “clearly [had] no intention of ever allowing the formal negotiation” of proper regulatory infrastructure at the CCW. “Israel’s delegation asserted that prohibitions on the use of certain weapons and other rules of international humanitarian law should not be conflated with how weapons can be used in specific contexts.”

 

Attendees, particularly those from the State of Palestine, adamantly argued that it was this exact attitude, and the failure to establish new international rules, that had allowed the Occupation’s forces to carry out clear and well-documented atrocities in the Gaza Strip over recent months, armed with new AI-enabled technologies. Rejecting this notion, Israeli delegates attempted to sever ties between the discussions at the GGE and the ongoing genocide of Palestinians, claiming that there was “no relation between the type of systems being discussed at this GGE and the systems referenced in Palestine”. Interestingly, and in stark contrast to reports from its own intelligence officers, the delegation stressed that Israel is not using an AI system that autonomously chooses targets for attack without MHC.

 

Though there is not yet a clear international regulatory framework specific to LAWS, or to AI-enabled weaponry more broadly, owing to significant opposition from states developing such weapons, like the US, Israel and Russia, guiding principles were laid out in 2019. At a Meeting of the High Contracting Parties to the CCW, a number of clear parameters for development and use were agreed as the groundwork for a later legal framework, establishing Meaningful Human Control as a foundational notion for the regulation of LAWS. The meeting culminated in the adoption of a number of ‘Guiding Principles’. Some of these are: 

 

  1. International humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems.
  2. Human responsibility for decisions on the use of weapons systems must be retained, since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system.
  3. Human-machine interaction, which may take various forms and be implemented at various stages of the lifecycle of a weapon, should ensure that the potential use of weapons systems based on emerging technologies in the area of LAWS is in compliance with applicable international law, in particular IHL.

 

A particularly pertinent issue for the ongoing use of ‘Targeting AI’ by the Israeli military is automation bias: a “specific class of errors people tend to make in highly automated decision making contexts”, in which human operators sub-consciously defer to the decisions made by whatever system is used. Operators can be overly confident of the reliability and accuracy of the information they see on their screens. Worse, human operators working under a chain of command will be trained and directed to trust the system’s software implicitly. This is a particular problem when decision-making hierarchies, such as military authorities, order quick action on targets. This is exactly the issue we are seeing played out in the use of AI-enabled targeting systems in Israel’s genocidal actions in the Gaza Strip.
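
 

To make the mechanism concrete, a minimal sketch follows. It is an illustrative model only, with every probability a hypothetical assumption: it shows how, when genuinely independent review is rare, an automated system’s errors pass through to approved decisions almost unfiltered.

    # Illustrative model of automation bias in a human review chain.
    # All parameters are hypothetical assumptions made for this sketch;
    # none are figures from the reporting discussed in this article.

    def approved_error_rate(machine_error_rate: float,
                            p_independent_review: float,
                            p_catch_given_review: float) -> float:
        """Fraction of approved targets that are machine errors.

        An error survives review unless the operator both performs a
        genuinely independent check AND catches the mistake.
        """
        p_error_caught = p_independent_review * p_catch_given_review
        return machine_error_rate * (1.0 - p_error_caught)

    # Diligent oversight: most recommendations are independently re-checked.
    print(approved_error_rate(0.10, p_independent_review=0.9,
                              p_catch_given_review=0.9))   # -> 0.019

    # 'Rubber stamp' review: checks are rare and cursory.
    print(approved_error_rate(0.10, p_independent_review=0.1,
                              p_catch_given_review=0.3))   # -> 0.097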

 

‘Lavender’ Programme

 

As reported by +972, five sources recently revealed that a high-ranking member of the Israeli military’s elite intelligence Unit 8200 had written a book titled The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionise Our World. The book, credited to ‘Brigadier General Y.S.’, discusses the limitations of human target selection and the ‘bottleneck’ in authorisation when confirming these targets. The development of the ‘Lavender’ programme can be seen as the product of this desire to remedy the ‘bottleneck’ of authorising targets for assassination. 

 

The programme, according to sources within the Israeli military, marks all suspected operatives in the military wings of Hamas and PIJ for assassination. Its operational use, however, illustrates that there is little MHC and oversight, by design. Six sources from within the Israeli military, all intelligence officers who have served during the ongoing ethnic cleansing of the Gaza Strip, spoke of the “sweeping approval for officers to adopt Lavender’s kill list, with no requirements to thoroughly check” any of the data or individuals marked. This is despite the admission that ‘Lavender’ has a “10% error rate” when identifying individuals. Reportedly, “human personnel often served only as a ‘rubber stamp’ for the machine’s decisions”, with no established system for checking ‘Lavender’s’ decisions or understanding its opaque rationale, and with clear automation bias on display. When decisions were checked, officers devoted “only about ‘20 seconds’ to each target before authorising the bombing”. 
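
 

The scale implied by these reported figures can be made explicit with simple arithmetic. In the sketch below, the pool size is a purely illustrative assumption (the reporting above specifies only ‘tens of thousands’); the 10% error rate and the roughly 20 seconds of review per target are the figures quoted above.

    # Back-of-the-envelope arithmetic on the reported Lavender figures.
    # The pool size is a hypothetical illustration: the reporting above
    # establishes only 'tens of thousands' of marked individuals.

    marked_individuals = 30_000        # hypothetical illustrative pool size
    reported_error_rate = 0.10         # the "10% error rate" per +972's sources
    review_seconds_per_target = 20     # "about '20 seconds'" per target

    misidentified = marked_individuals * reported_error_rate
    review_hours = marked_individuals * review_seconds_per_target / 3600

    print(f"People wrongly marked at a 10% error rate: {misidentified:,.0f}")
    # -> 3,000 misidentified people in a pool of 30,000

    print(f"Total human review time at 20s each: {review_hours:,.0f} hours")
    # -> roughly 167 hours of scrutiny for the entire kill list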

 

Not only does this illustrate a clear disregard for both international law and the lives of Palestinians, it sets an incredibly dangerous precedent for the use of AI in battle scenarios. Additionally, the hundreds of videos circulating online of the Israeli military dropping various munitions on civilian residences make clear that, whilst targeting systems are being used to identify suspected Hamas and PIJ operatives, this information is not being applied in combat scenarios, but rather to strike civilian infrastructure.

 

Critically, whilst the error margins are themselves a serious concern, it is the application of the outputs of these ‘Targeting AI’ systems, and the foundational attitude that Palestinian civilian life is superfluous to military objectives, that create an environment in which such atrocities can occur. 

 

‘Where’s Daddy?’ Programme

 

With ‘Lavender’ being used to target individuals, the Israeli Occupation Forces also use what they have dubbed ‘Where’s Daddy?’. Damning to any rhetoric suggesting the military is “making efforts” to protect civilians, the software is used specifically to track targeted individuals and to carry out bombings once they have entered their families’ residences. According to +972’s report, and in line with much of the footage circulating, this was a primary military objective: 

 

“We were not interested in killing [Hamas] operatives only when they were in military buildings or engaged in military activity. On the contrary, the IDF bombed them in their homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.” 

 

This is in clear violation of Rule 1 of International Humanitarian Law, which demands that “parties to the conflict must at all times distinguish between civilians and combatants. Attacks may only be directed against combatants. Attacks must not be directed against civilians”. Further, this policy encourages “indiscriminate bombardments”: attacks which treat a whole civilian area containing a single military objective as one legitimate target. This not only adds to the clear evidence of genocidal acts in the Gaza Strip, but further legitimises accusations that this is part of a wider policy objective in Palestine. 

 

Two of the sources told +972 that, within the first few weeks of the invasion, the military decided that “for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians”. Additionally, if the target occupied a more senior position, such as battalion or brigade commander, “the army on several occasions authorised the killing of more than 100 civilians in the assassination of a single commander”. Commenting on the assassination of “ground soldiers”, or what the anonymous soldier called “garbage targets”, the Israeli operative “still … found them more ethical [assassinations] than the targets that [they] bombed just for ‘deterrence’ — high rises that are evacuated and toppled just to cause destruction”. Once again, impACT would like to stress that these are clear violations of international law and war crimes. The establishment of an ‘acceptable’ number of civilian killings is a clear contravention of the entire humanitarian foundation of the Geneva Conventions, as is the destruction of civilian objects. 
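
 

Simple arithmetic shows what these reported authorisation parameters imply at scale. In the sketch below, the strike counts are hypothetical illustrations (the testimony gives no totals); only the per-target civilian ‘allowances’ come from the quotes above.

    # What the reported per-target civilian 'allowances' imply at scale.
    # Strike counts are hypothetical illustrations; only the per-target
    # ratios come from the testimony quoted above.

    JUNIOR_LOW, JUNIOR_HIGH = 15, 20   # civilians permitted per junior operative
    SENIOR = 100                       # civilians permitted per senior commander

    junior_strikes = 1_000             # hypothetical count of junior-target strikes
    senior_strikes = 10                # hypothetical count of commander strikes

    low = junior_strikes * JUNIOR_LOW + senior_strikes * SENIOR
    high = junior_strikes * JUNIOR_HIGH + senior_strikes * SENIOR

    print(f"Civilian deaths pre-authorised by these parameters: {low:,}-{high:,}")
    # -> 16,000-21,000 civilian deaths treated as acceptable in advance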

 

The combined use of ‘Lavender’ and ‘Where’s Daddy?’ is a clear contravention of international law. The authorities responsible for this policy, the officers who carry out these orders, and the companies that knowingly provided the military with technologies enabling these war crimes must all face legal repercussions.

 

Facial Recognition Software

 

A concerning, though certainly not new, contributor to the systems of surveillance that feed ‘Targeting AI’ software like ‘Lavender’ is the wide deployment of biometric surveillance technologies, in particular facial recognition software. 

 

Facial recognition has been deployed in the Occupied Palestinian Territories, particularly in the West Bank, for some time. Along the highly militarised border walls and checkpoints that intersperse the Occupied land, various software systems are active in a process Amnesty International has called “Automated Apartheid”. In the segregated ‘H2’ portion of Hebron, the Wolf Pack database, just one of the myriad biometric surveillance technologies deployed, is used to scan and score the data of Palestinians, which is then used to restrict their movement between military checkpoints established by the Occupying power. Red Wolf, part of this wider programme, scans the faces of Palestinians without their consent or even their knowledge. It essentially determines whether individuals can pass a checkpoint: if there are no biometric records of an individual Palestinian, “they will be denied passage”. Speaking to The Guardian, Dr Matt Mahmoudi, an advisor on AI and Human Rights at Amnesty International, explained that the Israeli military has ‘gamified’ this authoritarian practice. Using a smartphone application linked to the software, named Blue Wolf, soldiers access information on Palestinians using their phones, and if they log new biometric data, such as scans of Palestinians’ faces, their unit is awarded points. A simplified sketch of this decision logic appears below.
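
 

What follows is a deliberately simplified sketch of that decision logic as described by Amnesty International; every identifier, field and threshold is a hypothetical stand-in, since the real systems’ internals are not public.

    # Deliberately simplified sketch of the Red Wolf / Blue Wolf logic as
    # described by Amnesty International. Every name, field and threshold
    # here is a hypothetical stand-in; the real systems are not public.

    from dataclasses import dataclass

    @dataclass
    class BiometricRecord:
        person_id: str
        risk_score: int                     # opaque score attached to the person

    database: dict[str, BiometricRecord] = {}   # face ID -> record
    unit_points = 0                             # Blue Wolf-style points tally

    def checkpoint_decision(face_id: str) -> str:
        """Default-deny: a person with no record cannot cross."""
        record = database.get(face_id)
        if record is None:
            return "DENIED"                 # absence of data is disqualifying
        return "DENIED" if record.risk_score > 50 else "ALLOWED"

    def log_new_biometric(face_id: str, risk_score: int) -> None:
        """Enrolling a new face earns the soldier's unit points."""
        global unit_points
        if face_id not in database:
            database[face_id] = BiometricRecord(face_id, risk_score)
            unit_points += 1                # reward for expanding the database

The detail worth noticing is the inverted default: a person about whom the system holds no data is treated as ineligible to move, the opposite of a presumption of innocence.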

 

Since the invasion of the Gaza Strip, Israeli forces have looked to deploy similar facial recognition software to harvest data from Palestinians fleeing the ethnic cleansing. This was made evident when Palestinians, fleeing the Occupation’s military forces after civilians were ordered to move to the south of the Gaza Strip, were filmed passing through checkpoints fitted with facial recognition technology. Travelling through these chokepoints on their way to supposed ‘safe zones’ in Khan Younis and now Rafah, both of which have been heavily bombarded, resulting in tens of thousands of civilian deaths, people were forced past cameras and software that log biometric data, data which will certainly feed the targeting of Palestinians now sheltering in Rafah. 

 

According to reports from the New York Times, the military is using Google Photos and software from the Israeli technology company Corsight AI to process and surveil the population of Gaza using footage which, according to the company’s advertising, can “accurately identify people even if less than 50% of their face is visible”. The company has been eager to aid the Israeli military’s concerning actions: in The Jerusalem Post, investor Aaron Ashkenazi stated that they were eager to provide the military “with the technological tools to stop these evil terrorists in their tracks”. During his evacuation to the south of the Strip, the renowned Palestinian poet Mosab Abu Toha was identified as a suspected operative at a checkpoint. After being detained and subjected to beatings by Israeli forces, he was released. Not only does this illustrate that these technologies have clear flaws in their output, but also that the Israeli forces are unwilling to properly investigate their own accusations and continue to persecute Palestinians. 
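
 

Misidentifications like Abu Toha’s are a statistical inevitability when face matching is run against an entire population, as a short base-rate calculation illustrates. Every figure below is a hypothetical assumption chosen only to show the arithmetic, not a vendor or military number.

    # Base-rate arithmetic: even a highly 'accurate' face matcher produces
    # large numbers of false accusations when run against a whole population.
    # All numbers are hypothetical illustrations, not vendor or military figures.

    crossings = 1_000_000         # hypothetical number of screened crossings
    false_positive_rate = 0.01    # hypothetical 1% false-match rate
    actual_targets = 100          # hypothetical genuine matches in the crowd

    false_alarms = (crossings - actual_targets) * false_positive_rate
    precision = actual_targets / (actual_targets + false_alarms)

    print(f"Innocent people flagged: {false_alarms:,.0f}")   # -> ~10,000
    print(f"Chance a flag is correct: {precision:.1%}")      # -> ~1.0%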

 

The Deployment of Military Robotics

 

A topic that has come under some scrutiny is the deployment of robotics in the ethnic cleansing of Gaza. Though this lends itself to the notion that the future of military technology may involve roaming ‘killer robots’, it is the more mundane ‘Targeting AI’ that has illustrated a far more dystopic vision of what the future holds. Nonetheless, the deployment of robotics in the Gaza Strip warrants attention.

 

Ghost Robotics, of Philadelphia, United States, a company headquartered in the University of Pennsylvania’s “Pennovation Works” office and lab complex, has been supplying the Occupation’s forces with “Vision 60 Robot Dogs”. The robotic quadrupeds, which cost $165,000 per unit, are fitted with surveillance technologies to further aid the collection of data for the invasion of the Gaza Strip. 

 

Ghost Robotics has come under significant pressure from local groups in Philadelphia to cease its provision of goods to a regime committing acts of genocide and ignoring international law. Unlike other robotics companies, such as Boston Dynamics, Ghost Robotics has refused to sign pledges not to weaponise any of its devices. In critical opposition to the company’s aiding of Israeli ethnic cleansing, Daniel Koditschek, a roboticist who trained and taught the company’s 2015 founders (Avik De, Gavin Kenneally and Jiren Parikh), released a statement demanding that the company remove references to him and his group from its website and literature. It states that their work is an “affront” to his research and has “corrupted the very aims and nature of robotics research”. Students at the University of Pennsylvania have demanded the university “boot the firm” from its campus. Additionally, the collective named ‘Shut Down Ghost Robotics’ has lamented that their “tax dollars are subsidising and that our campus is supporting the manufacture of these robotic dogs”. 

 

Whilst the Vision 60 Robotic Dogs are not the ‘killer robots’ so often imagined in discussions of LAWS, or of AI-enabled weaponry more broadly, their deployment only serves to further illustrate how the arms industry and the Israeli state view the Occupied Palestinian Territories: as an area in which to test new and emerging technologies on a segregated population that has no domestic rights. 

 

AI-Washing

 

Due to the nature of the software deployed during Israel’s invasion of the Gaza Strip, impACT expresses significant concern that culpability for these genocidal policies, and for the war crimes committed by the Israeli regime, will be attributed to improper or faulty software. What the +972 reports reveal is that, whilst there is minimal MHC, in and of itself a contravention of the norms laid out in the CCW’s 2019 ‘Guiding Principles’, it is the wider application of policy that has created the conditions for these actions to be permitted. 

 

Evidenced by the actions of the Israeli Occupation, and by reports concerning the application of AI targeting systems, Palestinian civilians, men, women and children, are being treated as legitimate targets for assassination in what can only be described as genocidal actions. Setting an ‘acceptable amount’ of civilians to be killed when assassinating individual operatives, particularly when Israeli intelligence is clearly aware of those civilians’ presence, amounts to the deliberate killing of civilians and constitutes clear war crimes. Nor is this policy confined to aerial bombardment: as illustrated by the discovery of mass graves, among a reported total of 140 found, outside Nasser and Al Shifa hospitals, containing “over 390 bodies … including women and children, with many reportedly showing signs of torture and summary executions, and potential instances of people buried alive”, it is a wider policy across all military actions. The culpability for these actions lies in the laps of the Israeli authorities and the soldiers who carried out the orders. The international community must act accordingly.

 

Conclusion

 

The international community must act now to prevent the further ethnic cleansing of the Gaza Strip, particularly as a full-scale invasion of Rafah looms. With a high concentration of civilians now crammed into Rafah and its surrounding areas, continuous air strikes are killing civilians on a daily basis. The active, well-documented disdain that the Israeli authorities have for Palestinian civilians means that any invasion will surely result in yet more war crimes. 

 

As stated above, the international community is responsible for holding the Israeli authorities, and their soldiers, accountable for these actions. Additionally, the companies aiding the Israeli military, such as Google, Ghost Robotics and Corsight AI, knowing that it is carrying out ethnic cleansing in the Gaza Strip, must also be held accountable, as they are clearly failing their responsibilities to human rights norms. Mass surveillance infrastructure used to target individuals in non-combatant environments must not be allowed to develop unimpeded. impACT, in line with experts, UN officials and other NGOs like Access Now, demands that AI technologies be banned from use in any battle or war scenario. 

 

Given Israel’s intent to invade Rafah, with over 1 million people now sheltering in the area, the safety of Palestinians and the prevention of further ethnic cleansing are of paramount concern. With the wide deployment of surveillance technologies in Gaza, impACT is highly concerned that, if the ground invasion goes ahead, these technologies will result in yet more unwarranted kidnapping, torture and deaths of civilians. Further, given operational attitudes that treat the killing of civilians as a necessary cost of targeting Hamas and PIJ operatives, it is very likely we will see the Israeli authorities carrying out yet more war crimes.
