Robots instead of soldiers on the battlefield: Sounds futuristic, but it's the reality
Autonomous weapons raise moral, legal, and ethical concerns, but in Israel, debate over their limits is nearly absent. These systems act without human intervention - and without hesitation.
Over the past year and a half, we've seen tens of thousands of drones integrated into combat operations in Gaza, Lebanon, and practically every other front. The same is true of the Russia-Ukraine war, where hundreds of thousands of UAVs and kamikaze drones have already been deployed. The line that has seemingly not yet been crossed is that of a fully autonomous weapon - one that independently selects a target and decides whether to carry out a lethal strike.
Do we want to reach a point where a robot decides to carry out a lethal action while the human is removed from the loop? Imagine the scenario: a lethal autonomous drone hovers over a combat zone and waits. It detects suspicious movement, calculates a trajectory, checks the input against its rules of engagement, analyzes the data - and "pulls the trigger." There is no human operator behind a screen, no one to approve the strike; just an algorithm and pre-programmed instructions - a line of code deciding life or death. It sounds like a futuristic script, but this is the reality beginning to seep into the battlefield.
Many automatic systems are already in use. The difference between an automatic and an autonomous system lies in the choice - the decision - the autonomous system makes to carry out a lethal action. An automatic system merely reacts to a breach or a touch on a fence, while an autonomous system includes an element of "judgment" - that of a machine based on artificial intelligence (AI), which, as we all know, is not always accurate.
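To make the distinction concrete, here is a minimal, purely hypothetical sketch in Python. Every name, threshold, and rule in it is invented for illustration and does not describe any real weapon system; it only contrasts a fixed, pre-wired reaction with a runtime "judgment" made by a model.

```python
# Purely illustrative sketch - all names, thresholds, and logic are
# hypothetical and simplified; no real system works this way.

def automatic_system(fence_sensor_triggered: bool) -> str:
    """Automatic: a fixed, pre-wired reaction to a physical trigger."""
    if fence_sensor_triggered:
        return "sound alarm"      # the same response every time, no judgment
    return "stand by"

def classify_threat(sensor_data: dict) -> float:
    # Stand-in for a machine-learning classifier; real models misclassify.
    return 0.95 if sensor_data.get("movement") == "suspicious" else 0.1

def within_rules_of_engagement(sensor_data: dict) -> bool:
    # Stand-in for pre-programmed rules-of-engagement checks.
    return sensor_data.get("zone") == "combat"

def autonomous_system(sensor_data: dict) -> str:
    """Autonomous: the machine itself 'judges' and selects a lethal action."""
    threat_score = classify_threat(sensor_data)  # AI judgment - can be wrong
    if threat_score > 0.9 and within_rules_of_engagement(sensor_data):
        return "engage target"    # a line of code deciding life or death
    return "continue surveillance"
```

The structural difference is the point: in the automatic case, a human designed the entire response in advance; in the autonomous case, the decision to engage emerges from a model's judgment at runtime, with no human consulted in between.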
Lethal autonomous weapons are not a distant dream. These are tools - like drones or ground robots - that can identify a target, assess whether it is an "enemy," and shoot. Completely on their own. Without approval. Without hesitation. If this sounds like a red line to you - you're not alone.
Around the world, several organizations oppose the use of lethal autonomous weapons. The non-governmental organization Stop Killer Robots, for example, warns against the slippery slope of removing "human judgment" from the equation. To them, this is a dangerous threshold - immoral, illegal, and with unpredictable consequences. In Israel, public discourse on the issue is nearly nonexistent, even though the country is one of the global leaders in the development of advanced military technologies, including lethal autonomous systems.
It's clear to all that the technology already exists. Defense industries around the world are developing lethal autonomous systems with varying levels of autonomy and differing degrees of human involvement. Currently, the global assumption - backed by the UN framework that addresses the issue, the Convention on Certain Conventional Weapons (CCW) - is that even when lethal autonomous weapons are used, there should be significant human involvement in the operational loop. However, there is no consensus on what "significant" actually means, and in the absence of agreement, each country sets its own limits.
Does one person supervising five lethal autonomous drones qualify as significant human involvement? What about overseeing 100 autonomous drones? And when it comes to defense systems intercepting hundreds of missiles - will human involvement remain meaningful there? Probably not. Even defensive systems can cause severe collateral damage, as has already happened around the world - and as we saw just days ago, when a Russian missile, launched in response to a Ukrainian UAV attack, passed dangerously close to a civilian passenger plane and nearly hit it.
Who Bears Responsibility?
Who is accountable in the event of a disaster? The weapon developers, even though a long time may have passed since they built the system and they have no connection to the specific mission? Perhaps the operator, who was involved in only part of the decision chain? It's doubtful that society would want to place the burden of responsibility on a young soldier - or that such a move could be justified.
Will all these questions be answered before these systems reach the battlefield? The unfortunate answer is likely no. Will these questions delay the deployment of autonomous systems in combat? Again, the answer is no - because the advantages of such systems are overwhelming: saving human lives on both sides, improved precision, full compliance with rules of engagement, immunity to fatigue, pressure, cold, heat, fear, and emotion. All of these ensure that commanders will integrate autonomous systems into the battlefield as soon as they are available.
The vision of robots replacing soldiers - even to the point of pulling the trigger - is near, if not already here. However, as with other advanced technologies, developments on the ground are outpacing the ethical, moral, legal, and professional considerations that should accompany them. When it comes to a technology whose purpose is to kill - the consequences can be vast and far-reaching.
What Needs to Be Done? Plenty.
The discussion must expand. The government needs to advance appropriate legislation and regulation. Defense industries must develop their systems according to internationally established principles that ensure the safer and more responsible deployment of lethal autonomous systems - systems that do not place the burden of responsibility on the soldier at the end of the production and operational chain.