The IDF's Unit 8200 used artificial intelligence to eliminate a Hamas official and locate hostages in the Gaza Strip, three Israeli and US officials told The New York Times on Friday.
According to the report, the military used AI to kill Ibrahim Biari, a Hamas commander based in northern Gaza who helped plan the October 7, 2023, terrorist attacks in southern Israel. Four Israeli officials said the AI technology was cleared for deployment immediately after the attacks.
The report said the IDF had difficulty locating Biari in the first few weeks of the war. The technology used to find him had been developed a decade earlier but went unused until Unit 8200 engineers integrated AI into it; shortly afterward, the IDF located and struck him, officials said.
The AI located Biari by listening to his calls. Israeli intelligence also used the audio tool to search for hostages held by the terrorist organization, and two Israeli officers quoted in the report said the tool was refined over time for that purpose.
The strike that killed Biari also killed 50 other terrorists, the IDF said in November 2023. This came after the Pentagon asked the military to "detail the thinking and process behind the strike" in order to avoid further Gazan civilian casualties, an official told Politico.
Three people told The New York Times that many of these AI initiatives began as collaborations between Unit 8200 soldiers and IDF reservists employed at tech companies such as Google and Microsoft. Google, however, noted that "the work those employees do as reservists is not connected" to the company.
Israel also used AI technology to monitor reactions across the Arab world to the death of then-Hezbollah leader Hasan Nasrallah.
AI technology in warfare raises ethical concerns
The report cited three US and Israeli officials who said that these AI technologies have sometimes led to the deaths of civilians as a result of mistaken identification.
Hadas Lorber, head of the Institute for Applied Research in Responsible AI at the Holon Institute of Technology and a former senior director at the Israeli National Security Council, told The New York Times that the technology "raises serious ethical questions."
The report also quoted an IDF spokeswoman, who said the military "is committed to the lawful and responsible use of data technology tools."
Reports of IDF using AI last year
The Washington Post also reported on the IDF's use of AI late last December, saying the military used artificial intelligence to rapidly refill its "target bank," a list of Hamas and Hezbollah terrorists to be killed during military operations, along with details about their whereabouts and routines.
As in the recent New York Times report, ethical concerns were raised about the technology's use. The December report noted a debate within the IDF's senior echelons over the quality of the intelligence gathered by AI and whether the focus on AI had weakened the military's intelligence capabilities.
The military's Unit 8200, part of the Military Intelligence Directorate, supplies the army and the state with warnings and alerts to protect the country from terrorist threats.