Every technological innovation brings both good and bad.
Technology has revolutionised the way humans live by continually fulfilling previously unmet needs. One example is the advent of Artificial Intelligence (“AI”), which can perform human functions and may, in the distant future, even replace humans in some of them. One such function that AI has the potential to disrupt is warfare. In the foreseeable future, AI may give rise to unprecedented lethal autonomous weapons that could be fielded as an army for war or defense.
In recent years, a new AI arms race has begun among technologically advanced nations. The United States, for example, has developed an autonomous submarine capable of attacking and collecting data. Meanwhile, Russia has announced a new “technopolis” complex called “Era”, dedicated to developing artificial intelligence technology for defense purposes. With these advances, modern warfare moves a step closer to resembling the “Transformers” films.
The rise of AI weapons systems, however, raises important questions for international law. The use of weapons in armed conflict is governed by the principles of International Humanitarian Law (“IHL”). IHL forbids the indiscriminate killing of people who are not taking part in hostilities, which gives rise to the Principle of Distinction, applicable to every method and means of attack in an armed conflict. Adding AI to the methods and means of warfare will require paradigm shifts in the law to bring autonomous weapons under its governance, and the rise of AI-enabled weapons and equipment will necessitate a new regulatory regime for humanitarian law.
Principle of Distinction
The Principle of Distinction is a cardinal principle of IHL that requires belligerents to distinguish between lawful ‘military’ targets and ‘civilian’ objects that must be protected from intentional harm. On the battlefield, it is therefore necessary to identify each target clearly and to direct attacks at military objectives only.
The introduction of AI may lead to full automation of the battlefield in the foreseeable future. This could in turn produce “bloodless fights”: with humans excluded from the battlefield, combat would be conducted predominantly between AI-guided machines.
While developing AI-enabled weapons, it will be necessary to incorporate algorithms that give effect to, and adhere to, the principle of distinction. Fine-grained parameters, such as whether a person wears a particular uniform, carries a weapon, or turns hostile and opens fire, would be taken into account to draw the distinction. But, as observed earlier, with the prevalence of AI both sides of the battlefield will consist mostly of machines, making target identification a less perplexing task. With advances in accuracy, machine learning and computing speed, war machines will considerably raise the benchmark of armed conflict and thereby come to replace humans in most instances.
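The distinction parameters described above could, in a purely illustrative and highly simplified form, be encoded as a rule-based pre-check. All class names, fields and rules below are assumptions of this post for the sake of illustration; no real targeting system is this simple, and none of its actual logic is public.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    wears_enemy_uniform: bool
    carries_weapon: bool
    acting_hostile: bool      # e.g. firing or aiming at friendly forces
    is_hors_de_combat: bool   # surrendered, wounded, or otherwise out of combat

def classify_target(obs: Observation) -> str:
    """Return 'military' only when distinction criteria are met;
    doubt resolves in favour of protection."""
    if obs.is_hors_de_combat:
        return "protected"    # may never be attacked under IHL
    if obs.acting_hostile:
        return "military"     # direct participation in hostilities
    if obs.wears_enemy_uniform and obs.carries_weapon:
        return "military"
    return "civilian"

print(classify_target(Observation(True, True, False, False)))    # military
print(classify_target(Observation(False, False, False, False)))  # civilian
```

Even this toy sketch shows why the legal debate matters: every branch embeds a contestable judgement (what counts as “hostile”, how doubt is resolved) that a real system would have to learn or be programmed with.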
It is also important to consider that this technology, once accepted, will change more than the methods and means of war. AI is already closely rooted in civilians’ everyday activities, and the present dependence on AI, in forms such as Siri, Tesla’s automated vehicles and distribution-centre robots, will grow exponentially over time. Any disruption to non-combatant AI will therefore have a detrimental effect on everyday civilian life.
There are many potential benefits to an increased role for AI in warfare. As the targeting capabilities of AI improve, so will its ability to verify targets precisely as legitimate or illegitimate. Intelligent systems can be trained to reduce war crimes against people, such as the killing of civilians or prisoners of war, the destruction of civilian property, the torture of hostages and pillaging, while also minimising collateral damage arising from situations of confusion and confrontation. AI may prove more efficient than humans and contemporary methods and means of warfare, and battlefield errors should be significantly reduced. It will become possible to calculate and assess accurately the level of destruction an attack will cause, leading to fewer violations of the prohibition on indiscriminate attacks. Ancillary human rights violations, such as rape and torture, may be eliminated over time as AI becomes the dominant means or method of warfare.
On the other hand, the proliferation of AI also presents many challenges to preserving international law, especially the corruption of AI systems’ instructions through cyber tools. The 2008 cyber-attack on the Pentagon, and on the U.S. Department of Defense more broadly, is a classic example of a breach of intelligence in this era. The use of cyber warfare to misdirect autonomous weapons could cause grave destruction while creating significant problems of proof for attributing state responsibility to the intervening state. The simplest difficulty is that code can be exploited by individuals who reside in, are nationals of, or are employed by any number of states. We might even encounter a situation in which the author of an infecting virus is itself an AI, which would complicate matters further. Attributing responsibility becomes uncertain and problematic here.
As a technology that can potentially improve itself, AI may in time become infeasible for humans to control. Some scholars argue that such a technology, possessing higher intelligence, will come to rule humanity as an inferior species. Although it will take considerable time for technology to progress from narrow AI to superintelligent AI, even systems short of that level could have their checks and balances manipulated by man or machine.
Modification of the Law
The coming era will require advanced and intricate battlefields designed to accommodate combat between AIs. Unprecedented rules of the battlefield will be indispensable and will need to be formulated to govern AI warfare at sea, on land and in the air.
The definitions of civilian object and military object will need to be remodelled to remain effective with the employment of this new means of warfare. Civilian objects would then include new technologies used for civilian purposes, such as advanced versions of modern autonomous machines like Sophia, TUG and Roomba. This inclusion would exempt these technologies from being eligible targets in war.
Military objects would in turn encompass the companies manufacturing weaponised AIs, those training such AIs, and the professionals in charge of or controlling the operation of any weaponised AI. The new rules would extend special protection to potential dual-use targets, despite the military advantage inherent in their nature, location, purpose or use: prima facie, the people involved would qualify as civilians discharging their duties as mere facilitators, scientists or managers of companies and departments. A comprehensive definition of military objects, as well as civilian ones, will be needed to mark the boundary between objects that may be targeted and those that may not. This clarity of definition will in turn lead to precision in targeting military objects and combatants.
Notwithstanding every precaution, the standard rules fed into AI systems remain prone to corruption by cyber-attack, or to modification through developments in algorithms and the machines’ own self-learning. Such developments would create confusion and chaos in controlling AIs. A series of modifications to the Principle of Distinction is therefore necessary to adapt it to AI-driven warfare. Even after AIs are deployed, the technology will continue to evolve, sowing further seeds for alteration of the principles of IHL.
While the development of AI-enabled weapons has the potential to save thousands of human lives, we cannot rule out the possibility of these intelligent weapons becoming corrupted or going out of control.
The use of AI in war creates the potential for a soldier devoid of bias and hatred. The day when AI becomes the face of the military is not far off. The use of machines will grow rapidly, and they will possess the power to transform or destroy our society. It is difficult to foresee, however, how much value an intelligent machine will place on the safety of human lives as its intellect increases and it becomes more autonomous.
Humanity’s never-ending desire for power must come to a halt somewhere. The employment of new technologies must be curtailed at some point to ensure justice and peace and to maintain world order. Otherwise, the bloodless revolution envisioned through the creation of more AI may turn catastrophic for humans at the will of automated bots. The bloodlessness of this revolution will surely not last unless we guide our path through law.
Talal Husseini, Autonomous underwater robots: from Swordfish to the Orca, Naval Technology (May 2, 2019), https://www.naval-technology.com/features/autonomous-underwater-robots-navy/.
Lazarcheev Kandrat, MOD’s innovation technopolis will appear in Anapa (Feb. 22, 2018, 01:06 PM), https://defence.ru/article/innovacionnii-tekhnopolis-minoboroni-rf-poyavitsya-v-anape/.
International Committee of the Red Cross, The Law of Armed Conflict: Weapons 2 (2002).
International Committee of the Red Cross, Practice Relating to Rule 11: Indiscriminate Attacks, Customary IHL Database.
International Committee of the Red Cross, Rule 7: The Principle of Distinction between Civilian Objects and Military Objectives, Customary IHL Database; Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, art. 51(4), art. 85(3)(b), art. 57(2)(a)(iii), June 8, 1977, 1125 U.N.T.S. 3.
Id. art. 51(2), art. 48, art. 52(2).
International Committee of the Red Cross, Rule 7: The Principle of Distinction between Civilian Objects and Military Objectives, Customary IHL Database.
Michael E. O’Hanlon, The role of AI in future warfare, Brookings (Nov. 29, 2018).
Vincent Boulanin & Maaike Verbruggen, Mapping the Development of Autonomy in Weapon Systems, Stockholm International Peace Research Institute (2017).
Janna Anderson et al., Artificial Intelligence and the Future of Humans 11, Pew Research Center (Dec. 2018).
Peter Dockrill, Elon Musk Says The Future of Humanity Depends on Us Merging With Machines, Science Alert (Feb. 15, 2017), https://www.sciencealert.com/elon-musk-says-the-future-of-humanity-depends-on-us-merging-with-machines.
Steve Banker, Distinctive Warehouse Robotics Solutions Are Emerging, Forbes (Feb. 6, 2018, 03:22 PM), https://www.forbes.com/sites/stevebanker/2018/02/06/distinctive-warehouse-robotics-solutions-are-emerging/#3e70784121c7.
Maria de Kleijn, Using AI to map…AI?, Elsevier (Dec. 11, 2018), https://www.elsevier.com/connect/using-ai-to-map-ai.
International Committee of the Red Cross, Autonomy, artificial intelligence and robotics: Technical aspects of human control 6, ICRC (2019).
Kelly Cass, Autonomous Weapons and Accountability: Seeking Solutions in the Law of War, Loyola of Los Angeles Law Review (Jan. 4, 2015).
Secret US military computers ‘cyber attacked’ in 2008, BBC News (Aug. 25, 2010), https://www.bbc.com/news/world-us-canada-11088658.
Tom Simonite, AI Software Learns to Make AI Software, MIT Technology Review (Jan. 18, 2017), https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/.
Robin Hanson, Human Legacies When Robots Rule the Earth, MIT Technology Review (Oct. 3, 2017), https://www.technologyreview.com/s/609042/human-legacies-when-robots-rule-the-earth/.
Jessica Cussins Newman, Toward AI Security: Global Aspirations for a More Resilient Future (Center for Long-Term Cybersecurity, CLTC White Paper Series, Feb. 2019), https://cltc.berkeley.edu/wp-content/uploads/2019/02/CLTC_Cussins_Toward_AI_Security.pdf.
Surabhi Agarwal, Robots will have full consciousness in five years, says Sophia creator, Economic Times (Feb. 20, 2018, 01:14 PM), //economictimes.indiatimes.com/articleshow/62995077.cms?from=mdr&utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst.
Matt Simon, Tug, the Busy Little Robot Nurse, Will See You Now, Wired (Oct. 10, 2017, 08:00 AM), https://www.wired.com/story/tug-the-busy-little-robot-nurse-will-see-you-now/.
Samuel Gibbs, iRobot Roomba 880 review: a robotic vacuum cleaner that’s almost a pet, The Guardian (Feb. 24, 2015, 10:34 AM), https://www.theguardian.com/technology/2015/feb/24/irobot-roomba-880-review-robotic-vacuum-cleaner-thats-almost-a-pet.
Sasha Radin, Expert views on the frontiers of artificial intelligence and conflict, Humanitarian Law and Policy (Mar. 19, 2019).
Stephen Hawking: AI will ‘transform or destroy’ society, CNBC (Oct. 20, 2016), https://www.cnbc.com/video/2016/10/20/stephen-hawking-ai-will-transform-or-destroy-society.html.
OECD, Artificial Intelligence in Society, OECD Publishing, Paris (2019), https://doi.org/10.1787/eedfee77-en.
The views expressed in this post represent the views of the post’s author only.