
The Era of AI-Generated War Crimes

Israel's Lavender AI targeting in Gaza should serve as a reminder: It's past time to consider regulating AI's military applications.

Words: Bree Megivern
Pictures: WAFA

Last month, leaders from 10 countries and the European Union met to discuss emerging AI technology. The summit, co-hosted by the United Kingdom and South Korea, was the first landmark meeting since the Bletchley Declaration, the international agreement calling for accountability and international cooperation to ensure AI safety. During the Seoul AI Summit, participants committed to building a network of safety institutes to monitor AI development. The summit focused on the safe use of AI, but attendees did not address the elephant in the room: Any discussion of AI safety must include plans for regulating AI's military applications. After all, Israel's Lavender AI program is already demonstrating the catastrophic consequences of lightly regulated AI during wartime.

Since the Gaza offensive began, Israel has leveraged AI in its military campaign, blurring the line between human and machine decision-makers. If global leaders do not take meaningful steps toward regulating military-specific AI, their broader efforts on AI safety will be ineffective. Examining Israel's use of AI in war provides useful insight for the US and the Bletchley Declaration's signatories as they discuss guardrails for the future of AI.

Lavender AI

Information first shared with +972 Magazine exposes how Israel is using Lavender, an AI targeting system, to identify and target suspected Hamas militants. The Israel Defense Forces (IDF) have been using the database since before the Oct. 7 attack. Lavender is fed information gathered through mass surveillance of the Gaza Strip and uses it to rank individuals on a scale of 1-100, indicating the likelihood that each is a Hamas militant.

Prior to Oct. 7, the IDF used Lavender to identify human targets, but applied narrow criteria meant to single out high-ranking members of Hamas. After Lavender identified a suspect, the IDF cross-referenced the recommendation with other intelligence before deciding whether it could legitimately "incriminate" the target. According to The Guardian, after Oct. 7, Israel designated all Hamas operatives, regardless of rank, as targets, and Lavender's identification criteria were expanded to mirror the policy. The result was a Lavender-generated kill list of 37,000 targets, even though pre-war US and Israeli intelligence assessments estimated Hamas had only 25,000-30,000 militants.
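Lavender's internals are not public; the reporting describes only its inputs and its 1-100 output score. Still, a minimal, purely hypothetical sketch of a generic score-and-threshold pipeline (the names, scores, and cutoffs below are invented, not drawn from the reporting) helps show why broadening the matching criteria can balloon a target list without any new evidence:

# Purely hypothetical sketch; this is not the Lavender system. Names, scores,
# and cutoffs are invented to illustrate how lowering a cutoff expands a
# flagged list even though the underlying data never changes.
import random

random.seed(0)

# A made-up population, each person assigned a machine-generated score of 1-100.
population = [{"id": i, "score": random.randint(1, 100)} for i in range(50_000)]

def flag_targets(people, cutoff):
    # Flag everyone whose score meets or exceeds the cutoff.
    return [p for p in people if p["score"] >= cutoff]

strict_list = flag_targets(population, cutoff=95)  # narrow criteria
broad_list = flag_targets(population, cutoff=60)   # broadened criteria

print(f"Cutoff 95: {len(strict_list):,} people flagged")
print(f"Cutoff 60: {len(broad_list):,} people flagged")

The point of the sketch is that the size of such a list is a policy choice made by whoever sets the criteria and the cutoff, not a fact the machine discovers on its own.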


One source described intense pressure from superiors to sign off on Lavender's preselected targets faster. Another explained, "when it comes to a junior militant you don't want to invest manpower and time in it," justifying civilians being mistakenly targeted. The order to approve as many targets as possible meant some soldiers spent roughly 20 seconds assessing a target, acting only as a "stamp of approval." If, during those 20 seconds, the Lavender-generated target was determined to be a woman, the preauthorized attack would be canceled. Describing the experience of using Lavender, one user explained that so many intelligence officers were grieving loved ones lost in the Oct. 7 attack that having Lavender make decisions "coldly" made the work "easier." In response to the allegations, the IDF maintains that it uses Lavender as an analytical tool, not a decision-maker, and that it operates within the confines of international law.

The sheer volume of targets Lavender generated, combined with the emotional toll of Oct. 7 on IDF soldiers, may incentivize the IDF to ignore the principles of proportionality and precaution. As a result, the humans behind the machines may be emotionally distanced from their work and continue raising the civilian death toll with each button click. If Lavender is indicative of what a future with unregulated AI military applications looks like, world leaders must make progress on regulation before it’s too late. There are many paths toward regulating AI’s military applications, but expanding international law and building upon arms trade regulations are good places to start. 

International Humanitarian Law for AI

Current international humanitarian law (IHL) requires militaries to assess targets based on three principles: distinction, proportionality, and precaution. As a step forward, the international community could codify the emerging norm of "meaningful human control," requiring a clear definition of the term and prompting further discussions on legal limits to military-specific AI. Critics will posit that IHL is unenforceable and non-binding, followed only by willing states. However, the nuclear taboo shows that clearly defined limits can be a powerful deterrent. With clearly defined acceptable and unacceptable uses, the international community can promote self-regulation and prevent AI from becoming a tool for mass atrocities.

To regulate AI's military applications, slowing or preventing the proliferation of weaponized technology is crucial. Since "artificial intelligence" covers various technologies, effective regulation requires distinguishing between civilian and dual-use military applications. The Wassenaar Arrangement, a voluntary international agreement to regulate the transfer of conventional arms and dual-use technology, has already started down this path by listing related technologies, such as intrusion software, as controlled items. However, the Arrangement has not yet expanded to cover dual-use AI applications such as autonomous drones or machine learning algorithms. Signatories of the Arrangement must define dual-use AI technology and establish mechanisms for continuously reviewing the control list as AI evolves.

Efforts to regulate military applications of AI require policymakers to have a nuanced understanding of application-specific AI software, arms trade regulations, and AI's military use cases. Couple that with the difficulty of securing international cooperation on a security issue, and it is clear that regulation will not happen overnight. As world leaders consider their own roles in ensuring AI safety, they should also begin the difficult process of regulating AI's military applications: lives depend on it.

Bree Megivern

Bree Megivern has a Master's degree in Security Studies from Georgetown University. Her areas of interest are US foreign policy, international security, and global development.
