Offensive Autonomous Weapons: Should We Be Worried?
Christian Husby, Vol. 37 Associate Editor
Offensive autonomous weapons. Stephen Hawking, Elon Musk, and Noam Chomsky are opposed; Human Rights Watch is opposed; and 68% of Americans are opposed. So, should we be worried? Before drawing any conclusions, it is useful to define what autonomous weapon systems (AWS) are. The most commonly cited definition is that of the U.S. Department of Defense, which defines AWS as: “a weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.” The Center for a New American Security’s Ethical Autonomy project defines an autonomous weapon system as a “weapon system that, once activated, is intended to select and engage targets where a human has not decided those specific targets are to be engaged.” Other commentators compile their own definitions. For example, one scholar describes a “fully autonomous system” as one that can decide “on its own what to report and where to go,” that “may be able to learn and adapt to new information,” and notes that, generally, “the more intelligent a system is, the more autonomous it may be.” However similar or different these definitions, the word “autonomous” clearly refers to the degree of independence such weapons have from human control. One can imagine varied levels of autonomy on a sliding scale from more to less human control, and indeed there are. Thus, definitions have arisen, and terms have been assigned to weapons of varying levels of autonomy:
- Human-in-the-loop or semi-autonomous systems require a human to direct the system to select a target and attack it; examples include the Predator and Reaper UAVs.
- Human-on-the-loop or human-supervised autonomous systems select targets and attack them, albeit with human operator oversight; examples include Israel’s Iron Dome and the U.S. Navy’s Phalanx Close-In Weapon System (CIWS).
- Human-out-of-the-loop or fully autonomous weapon systems can attack without any human interaction.
The first two items are followed by examples because such weapons already exist. The first, UAVs, also known as drones, are nearly ubiquitous in the American consciousness; the second is perhaps lesser known. As for the third item, no such technology yet exists, and such “offensive autonomous weapons beyond meaningful human control” are the target of the ban called for by Hawking, Musk, and a score of other scientists and academics. It should also be noted that autonomous weapon systems are not necessarily embodied entities; they could include computer viruses, worms, and other malware. “‘[A]utonomous cyberweapons’ are essentially computer-based variants of DoD’s traditional definition of autonomous weaponry…” A combination of the third item, human-out-of-the-loop, and a cyberweapon certainly might start to flip some “worry” switches, but before that we should first look at what types of weapons currently exist. To be clear, “[r]obotic systems that are currently deployed all retain a ‘human in the loop,’ where a human operator can veto the decision of the machine.” In other words, there are currently no human-out-of-the-loop weapons. However, “[w]eapon systems with varying levels of autonomy and lethality have already been integrated into the armed forces of numerous states.” South Korea has developed, although it is not clear whether it has deployed, the Super aEgis II. This weapon can “find and lock on to a human-sized target in pitch darkness at a distance of up to 1.36 miles, uses anything from a 12.7 mm caliber machine gun to a surface-to-air missile to fire, and can be mounted on the ground or on a moving vehicle, [and it] may be set to modes where [it] can select and engage targets with no human involvement or oversight.” The U.S. has the Aegis Combat System, which gained some notoriety in the USS Vincennes incident that resulted in the downing of an Iranian civilian aircraft.
This system can identify, target, and engage incoming threats. Normally, a human operator can veto any decision the system makes, but in “casualty” mode it is capable of fully autonomous operation. Even more intriguing are autonomous cyberweapons. Stuxnet was a computer worm that infected Iranian industrial sites and was able to damage Iran’s uranium enrichment stations. In fact, the worm was perhaps too effective: due to a programming bug, Stuxnet allegedly spread to computers in the United States, India, and Indonesia. Additionally, in 2007, Estonia’s electronic infrastructure was struck by a botnet that overloaded and rendered useless Tallinn’s banking sites and internal government servers. There are also aircraft that can autonomously take off, fly, and land, and as one scholar surmised, “While these systems are not yet designed to autonomously attack an enemy, it is not difficult to imagine how such technology could be adjusted to support fully autonomous targeting.” Some are calling autonomous weapons a “third revolution in warfare, after gunpowder and nuclear arms.” Perhaps unsurprisingly, a number of advocates are campaigning for an absolute ban on the research, development, and production of autonomous weapons. Ban advocates worry that autonomous weapons will make war too easy to wage because political leaders will have to worry less about putting their citizens’ lives at risk, while others point to the difficulty of assigning accountability for war crimes committed by autonomous weapon systems. Much of the case against autonomous weapons focuses on their inherent inability to conform to the laws of armed conflict (LOAC). Specifically, ban advocates argue that autonomous weapons will not be able to adequately distinguish between lawful and unlawful targets or make proportionality assessments, which require weighing the importance of a military objective against the risk of likely collateral damage.
Critics also argue that the use of autonomous weapons may not accord with the Martens Clause of the First Additional Protocol to the 1949 Geneva Conventions, which, ban proponents argue, requires new technology to comply with “the principles of humanity” and “the dictates of the public conscience.” The other side of the fence argues, first, that it is too early and too speculative to call for an outright ban; and second, that not only is the LOAC sufficient to control the development of AWS, but AWS might actually conform to the LOAC better than humans do. Human combatants usually violate the LOAC out of “fear, anger, frustration, revenge, fatigue, stress, and self-preservation,” none of which would be experienced by AWS; thus, AWS could result in reduced “human casualties, collateral damage, and war crimes.” Although in some cases human emotion can act as a humanitarian regulator, “it is equally true that [emotion] can unleash the basest of instincts. From Rwanda and the Balkans to Darfur and Afghanistan, history is replete with tragic examples of unchecked emotions leading to horrendous suffering.” Thus, proponents argue that “there are serious humanitarian risks to prohibition and a very real possibility this technology will be ‘ethically preferable to alternatives.’” Proponents of AWS highlight not only the emotional frailties of humans but also their physical limitations. “A Defense Advanced Research Projects Agency official has stated that human beings are becoming ‘the weakest link in defense systems.’” AWS do not “get hungry, tired, bored, or sick. They are immune to biological and chemical weapons. They tackle the dirty, dangerous, and dull work without complaint.
They can reach inaccessible areas and survive in inhospitable environments.” Furthermore, the tempo and complexity of combat are ever-increasing, and it may not be long before human operators cannot keep up; AWS may also create a combat environment too complex for humans to direct. In light of this, proponents argue that the “technology . . . could potentially make our lives better; and to pass on the opportunity to develop AWS is irresponsible from a national security perspective.” On the other hand, perhaps well-placed worry regarding AWS concerns less the current debate over the merits and demerits of existing and developing technology, and more what AWS could become. There is much skepticism about human-out-of-the-loop weapons because they do not currently exist and could not exist “absent dramatic improvements in artificial intelligence.” However, it is easy to see the danger of artificial intelligence (AI) being incorporated into cyberweapons to create something more dangerous, and perhaps less controllable (inadvertently or by design), than Stuxnet. Or AI mixed with “swarm” technology, “where a large number of small UAVs operate in concert to perform designated missions,” could create a level of complexity that would make the system highly unpredictable. Even without knowing exactly what could be developed, there is much impetus for greater autonomy and greater complexity. U.S. military officials have commented that “[t]here’s really no way that a system that is remote controlled [such as a UAV] can effectively operate in an offensive or defensive air combat environment.
The requirement of that is a fully autonomous system.” Moreover, one-upmanship may quickly push the development of fully autonomous weapons because “[a] force that does not employ fully autonomous weapon systems will inevitably operate outside the enemy’s ‘OODA [observe, orient, decide, and act] loop,’ thereby ceding initiative on the battlefield.” The importance of this technology is reflected in the U.S. Government’s intention, expressed in 2007, “to spend at least $24 billion on unmanned systems through 2013.” Worry may relate not only to the AWS technology itself, but also to any inadequacies in the law governing AWS. “There is no treaty specifically governing the use of unmanned systems or AWSs. However, like all other weapon systems, unmanned vehicles and AWSs are subject to the general principles of the LOAC.” The general principles of the LOAC do not seem adequate to allay the worries and fears of the opposition to AWS, but it is questionable whether any treaty could realistically address the opposition’s concerns. Two Judge Advocates compare the calls to prohibit AWS to earlier calls to prohibit aerial bombardment. They argue that the “tragic and sad tale” of the damage caused by aerial bombing in World War II was avoidable “had the conversation focused not on prohibiting aerial bombardment, but rather on improving the technology of bombardment to prevent civilian casualties and bringing aerial bombardment into compliance with existing laws of armed conflict.” Other scholars agree that a ban asks the wrong question, both because some forms of AWS are already in widespread use and because, if the use of autonomous weapons means winning the war, they will be used. Thus, it is argued, the better path is to “proactively channel the development of autonomous weapon systems” and set “forth definitions and rules regarding development, usage, or transfer.” This could involve a “[c]ollaboration between scholars and practitioners .
. . to develop creative, yet pragmatic, answers to the difficult questions created by autonomous weapons.” Worry often stems from the unknown, from low-risk/large-consequence events, and from genuine threats. Perhaps it would be a useful exercise to attempt to ascertain the origins of the worries related to AWS, and whether a line can be drawn as to where the true threat begins or what constitutes unacceptable consequences despite low risk. Human-out-of-the-loop weapons integrating advanced AI do not currently exist. Perhaps now is an opportune time to consider whether that prospect constitutes a real threat, or whether it is something we would be comfortable seeing become a reality. Comparing AWS to aerial bombardment is a useful analogy, but perhaps not the most apt one. A better point of comparison may be the bans on the development of biological and chemical weapons. More recently, “[t]he 1995 Protocol IV to the Convention on Certain Conventional Weapons prohibits state parties from employing or transferring ‘laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision.’” This prohibition, which 104 state parties have joined, is especially notable because it was a prospective ban (blinding lasers were in development but had not yet been deployed at the Protocol’s conclusion), and there have been no recorded violations. Permanently blinding lasers are worrisome, but arguably less so than artificially intelligent weapon systems.
1. Autonomous Weapons: An Open Letter from AI & Robotics Researchers, Future of Life Institute, http://futureoflife.org/open-letter-autonomous-weapons/ (last visited Nov. 12, 2015) [hereinafter Open Letter].
2. Human Rights Watch, Losing Humanity: The Case Against Killer Robots 5 (2012).
3. Rebecca Crootof, The Killer Robots Are Here: Legal and Policy Implications, 36 Cardozo L. Rev. 1837, 1880 (2014).
4. Id. at 1847.
5. Gregory P. Noone & Diana C. Noone, The Debate Over Autonomous Weapons Systems, 47 Case W. Res. J. Int’l L. 25, 27 (2015).
6. Crootof, supra note 3, at 1849.
7. Benjamin Kastan, Autonomous Weapons Systems: A Coming Legal “Singularity”?, U. Ill. J.L. Tech. & Pol’y 45, 50 (2013).
8. Noone & Noone, supra note 5, at 28.
9. Id.
10. Open Letter, supra note 1.
11. Crootof, supra note 3, at 1854.
12. Christopher M. Kovach, Beyond Skynet: Reconciling Increased Autonomy in Computer-Based Weapons Systems with the Laws of War, 71 A.F. L. Rev. 231, 233 (2014).
13. Kastan, supra note 7, at 50.
14. See, e.g., Noone & Noone, supra note 5, at 28.
15. Crootof, supra note 3, at 1840.
16. Id. at 1869.
17. Id. (internal citations omitted).
18. Kastan, supra note 7, at 50.
19. Id.
20. Kovach, supra note 12, at 234.
21. Id. at 235.
22. Id. at 248-49.
23. Michael N. Schmitt & Jeffrey S. Thurnher, “Out of the Loop”: Autonomous Weapons Systems and the Law of Armed Conflict, 4 Harv. Nat’l Sec. J. 231, 239 (2012).
24. E.g., Open Letter, supra note 1.
25. Shane R. Reeves & William J. Johnson, Autonomous Weapons: Are You Sure? Can We Talk About It?, 2014 Army Law. 25, 25.
26. E.g., Crootof, supra note 3, at 1866, 1872.
27. Id. at 1842.
28. Id.
29. Noone & Noone, supra note 5, at 26.
30. Id. at 29-30.
31. Schmitt & Thurnher, supra note 23, at 249.
32. Reeves & Johnson, supra note 25, at 26.
33. Crootof, supra note 3, at 1867-68.
34. Id. at 1868.
35. Schmitt & Thurnher, supra note 23, at 238.
36. Noone & Noone, supra note 5, at 26.
37. Schmitt & Thurnher, supra note 23, at 238.
38. Kastan, supra note 7, at 53.
39. Jack M. Beard, Autonomous Weapons and Human Responsibilities, 45 Geo. J. Int’l L. 617, 633 (2013).
40. Schmitt & Thurnher, supra note 23, at 238.
41. Kastan, supra note 7, at 52.
42. Id. at 54.
43. Reeves & Johnson, supra note 25, at 27.
44. Id. at 29.
45. Crootof, supra note 3, at 1903.
46. Reeves & Johnson, supra note 25, at 30.
47. Crootof, supra note 3, at 1896, 1903.
48. Reeves & Johnson, supra note 25, at 30.
49. Crootof, supra note 3, at 1915.
50. Id.