Unmanned aerial vehicles (UAVs, or ‘drones’) are increasingly used in conflict zones (and elsewhere) for a variety of tasks, ranging from surveillance to targeted killings. To those who question the morality of using unmanned aircraft piloted with an iPhone from an office in Virginia to kill human beings, the retort is that remotely piloted aircraft allow for more considered and less emotion-driven decision-making. This ‘laudable’ objective is on the verge of being attained: according to a 2009 U.S. Air Force report, drones will be fully automated by 2047.
Technologies allowing automatic target engagement are tactfully described in the report as ‘a revolution in the roles of humans in air warfare’. Assuredly, this will be a revolution not only on the battlefield, but also in many offices where lawyers will have to deal with the consequences of automated warfare.
Fortunately, these future drones will bear little resemblance to the cold-blooded killer robots of science fiction. The report reassuringly states that
“commanders must retain the ability to refine the level of autonomy the systems will be granted by mission type, and in some cases by mission phase, just as they set rules of engagement for the personnel under their command today. The trust required for increased autonomy of systems will be developed incrementally. The systems’ programming will be based on human intent, with humans monitoring the execution of operations and retaining the ability to override the system or change the level of autonomy instantaneously during the mission.”
This raises questions about the shift of responsibility triggered by automated warfare. It also casts doubt on who should be held accountable for serious violations of the laws of war.
Decisions are usually taken by responsible moral agents capable of rational judgment. The notions of moral agency and responsibility, however, are difficult to reconcile with algorithms. The human agent monitoring the execution of an operation does not take the decision to engage a target. Should he be held accountable for having failed to intervene in time to prevent undesirable killings? Should commanders be held responsible for having wrongly defined the parameters and rules of engagement? Or should the programmers be held to account for having instilled deficient assessment mechanisms into the machine?
Culpable omissions and lack of due diligence are certainly reprehensible – many mechanisms of responsibility in international criminal law rest on those premises. Yet omissions are generally considered less culpable than actions. What is more, in all cases involving vicarious liability there is a primary offender – someone who pulled the trigger and may be held to account. When that ‘person’ is a machine, a crucial link is missing in the mental representation of responsibility. In the absence of a primary offender, the narrative of death is conceptually altered.
Technological advances in unmanned warfare displace the burden of decision-making and contribute to outsourcing and distorting responsibility. This is a gloomy prospect. It may be necessary to modify or strengthen existing mechanisms of responsibility to ensure that operations involving automated drones are closely and carefully monitored. In particular, it seems crucial that individuals who have the ability to intervene directly during a mission should bear primary responsibility for serious violations of the laws of war.