The realities of the IDF using AI on the battlefield - opinion

The moral and practical realities of the IDF admitting to using Artificial Intelligence (AI)

Illustrative image of AI. (photo credit: PIXABAY)

In recent weeks, high-ranking officers in the Israel Defense Forces (IDF) told the press that Israel is deploying Artificial Intelligence (AI) tools as part of its military arsenal. According to these reports, the IDF uses AI systems that assist in offensive decision-making, for example in determining whether a target is military or civilian. It also uses defensive tools, such as systems that alert forces when they are under threat from a rocket or missile, or that help safeguard movement along the border.

It seems that the AI explosion in the public sphere, with the introduction of ChatGPT and Microsoft's announcement that it will integrate AI into Bing, may have influenced the IDF's decision to openly declare its novel use of AI. Going public in this way asserts technological supremacy and has value in terms of deterrence. Still, is this the right time, or the right way, to do so?

The short answer is no. There is room for prudence when deploying new military capabilities, especially unregulated ones like AI-based tools. At this point in time, there is no benchmark to follow or by which to measure the IDF's activities. In fact, given the pace of developments in the digital sphere, states are still trying to grasp and regulate tools like intrusive software and malware, let alone AI-based ones.

More broadly, there are deep disagreements rooted in the differing perspectives and values of states. Israel, at least in terms of technological supremacy, views itself – and rightly so – as well positioned to promote the use of innovative tools that provide it with a technical edge. Nevertheless, there are three risks to such an approach.

The use of AI

First, states are required to use prophylactic impact-assessment measures, such as legality reviews of weapons and of means and methods of warfare, but it seems that the IDF is engaged in a process of trial and error on the battlefield. The temptation to lean on AI is obvious, as such a tool can perform in seconds calculations that would take humans weeks, if they could be done at all.

Artificial intelligence chatbots like ChatGPT are changing the world (Illustrative). (credit: PIXABAY)

Yet so long as AI tools are not explainable, in the sense that we cannot fully understand why they reached a certain conclusion, how can we justify trusting an AI decision when human lives are at stake?

The statements acknowledged that some of the targets attacked by the IDF are produced by AI tools. If, God forbid, an attack on an AI-generated target leads to significant harm to uninvolved civilians, who should be responsible for the decision?

Second, the IDF's decision to admit that it relies on defense-industry companies to explore AI tools with military applications is surprising. This is because Israel already faces criticism over sales by Israeli companies of offensive cyber tools to non-democratic regimes, which use them to suppress political resistance, monitor journalists and more.

Revelations concerning the use of NSO's Pegasus even prompted some, like former UN special rapporteur David Kaye, to call for a complete ban on offensive cyber tools until there is international regulation of their use.


Third, against whom is this technology deployed, and when? Are these tools deployed against a counterpart that is also tech-savvy, say, Iran, or are they part of the management of territories in Judea and Samaria? The context matters, and it shapes the perception that will develop around these tools. Moreover, admitting that Israel used AI on the battlefield invites and justifies the reciprocal use of such tools against Israel. Other states can also rely on these statements to deploy AI tools in other contexts, for example in the Russia-Ukraine conflict.

One encouraging thing is that the IDF appears to use these tools as a complement to human decision-making, not as a substitute for the human factor. It is important to keep a human in the loop, both to promote accountability and because we are not yet fully aware of the capabilities and risks of AI tools. In this regard, Israel is setting a positive example that should be followed.

The writer is the program director at Tachlith Policy Institute.