ACRI demands IDF explain use of AI targeting system

The IDF uses an AI system designed to identify potential targets for military attacks.

The IDF’s Digital Transformation Division (photo credit: IDF SPOKESPERSON UNIT)

The Association for Civil Rights in Israel (ACRI) has submitted a Freedom of Information request to the IDF Spokesperson’s legal division, demanding more transparency regarding the use of artificial intelligence (AI) for targeting.

ACRI’s move is a typical first step toward a later petition to the High Court of Justice to have a military practice, in this case the use of AI for targeting, declared illegal.

The request centers on a specific AI system developed by the IDF, known as the “HABSORAH” (The Gospel) system.

The system is intended to identify potential targets for military attacks. According to military publications, The Jerusalem Post, and other media, the “HABSORAH” system is designed to “convert vast amounts of irregular intelligence information into actionable recommendations for targeted attacks.”

An Israeli Air Force F-15 fighter jet flies during an aerial demonstration at a graduation ceremony for Israeli air force pilots at the Hatzerim air base in southern Israel, June 30, 2016. (credit: AMIR COHEN/REUTERS)

“It employs an algorithm drawing data from diverse sources, processing it to propose potential targets at an accelerated rate,” said ACRI.

Understanding the HABSORAH system, and the ability of intelligence decision-makers to comprehend its output and justify it, is crucial for compliance with the laws of war, said ACRI.

The Post has exclusively spoken to sources familiar with the IDF’s use of AI and the balance it strikes with human legal approvals for targeting. It is claimed that the human approval element is still a critical part of the process.

AI use for targeting raises 'substantial' ethical, legal questions

The NGO noted that, “The deployment of AI systems, which involve a certain degree of autonomy in information processing, raises substantial technological, ethical, and legal questions. The use of AI systems capable of making decisions in sensitive areas or potentially violating human rights is a cause for genuine concern.”

A statement from ACRI added, “These concerns arise from the ‘dark side’ of AI, manifested in potential acceptance of incorrect decisions due to machine errors and severe biases.”

“The reliance on complex algorithms that lack transparency may create a ‘black box,’ where critical decisions are made by computers without human comprehension of the underlying considerations or evaluation of the reliability of conclusions,” warned the statement.

Moreover, the human rights group wrote, “The inherent lack of transparency in AI technology has prompted some researchers to recommend limitations on AI use in decisions affecting human rights.”

ACRI also noted that the EU is moving toward formal legislation to regulate the use of AI systems, “especially those impacting human rights, necessitating special transparency measures and stringent regulation.”

Further, the statement said that human rights concerns “intensify when applied to AI systems in intelligence, as they thrive on large-scale, high-quality information and prolonged supervised learning. Real-time intelligence on the battlefield relies on frequently changing, new information.”

During the current war, “decisions to incriminate a building or location for attack, while aiming to gain a military advantage and shape the campaign, can pose life-and-death risks. Such decisions not only endanger innocent [Gazan] civilians, but also Israeli soldiers and the hostages taken by Hamas.”

The laws of war mandate adherence “to the principles of distinction between combatants and civilians, proportionality, and caution. Relying on AI-generated intelligence for attack decisions, without the ability to explain and understand the decision-making process, undermines international humanitarian law obligations to which Israel is committed,” argued ACRI.

Human rights advocates demand IDF ensure AI compatible with law

ACRI has formally requested that the IDF Military Advocate General scrutinize the compatibility between reliance on the AI system and Israel’s obligations under both Israeli and international law.

As part of this request, ACRI urged an urgent examination of the “HABSORAH” system and the “use of AI in making decisions about attack targets, emphasizing the need to ensure alignment with Israeli military obligations.”

ACRI explained that it wants to analyze the extent to which the IDF can rely on the combination of AI and big data, how the system assesses whether information is reliable and current, “and its potential biases that could lead to the erroneous identification of targets for attack, resulting in severe harm to civilians who are not taking a direct part in hostilities.”

The NGO asserted that “this review is especially crucial given the substantial increase in targets approved by the system compared to those previously approved by human intelligence, and in light of the high number of fatalities among non-combatant Gazan civilians, including women and children.”

Noa Sattath, ACRI’s executive director, added: “ACRI remains committed to safeguarding human rights and promoting accountability, ensuring that the use of AI in military operations aligns with legal and ethical standards.”

The IDF declined to respond on Wednesday, but it or the Justice Ministry is expected to respond within a matter of weeks or months.

Freedom of Information requests are often slow-walked by government and defense agencies, which frequently claim that the requested information is classified or would endanger national security.

If ACRI is unsatisfied with the IDF and government response, as it likely will be, its next move would probably be a petition to the High Court.