Moving Toward an Artificial Intelligence Treaty

Julian McIntosh
Vol. 42 Associate Editor

Introduction

Artificial Intelligence (AI) has proliferated at a breakneck pace, with the United States and China at the vanguard.[i] AI is often thought of in the context of massive supercomputers.[ii] However, the technology has advanced so widely that AI is seeping down to the personal level.[iii] As with any world-changing advancement, becoming the technology leader gives a country the opportunity to supercharge its economy and chart its path to prosperity.[iv] However, it also creates opportunities to weaponize innovation.[v] With such temptation at the fingertips of every country developing AI technology, it is paramount that a treaty be implemented to regulate further development.[vi]

Why a Treaty is Necessary

China and the United States, the two most powerful economies in the world, are competing for dominance in the AI space.[vii] This has created the perception of an AI arms race.[viii] That perception, however, undersells the long-term potential of AI and oversells its short-term impact.[ix] “For the foreseeable future, AI will only incrementally improve existing platforms, unmanned systems such as drones, and battlefield awareness.”[x] Meanwhile, AI research will benefit from cross-silo collaboration; treating development as a zero-sum game would limit the heights that AI can reach.[xi] Though the United States surely wants to be dominant in this space, it is this understanding that motivated a congressional report calling for an AI treaty. In the report, the United States proffers:

[T]he United States should lead in the formulation and ratification of a global treaty on Artificial Intelligence in the vein of the Geneva Conventions, the Chemical Weapons Convention, and the Nuclear Non-Proliferation Treaty to establish guardrails and protections for the civilian and military use of AI.[xii]

In doing so, the United States believes the world should shift its focus to “establish accountability, promote collaboration and transparency, ensure fairness, and limit the harmful use of AI.”[xiii] Ultimately, the proposal is light on details, but the high-level idea has merit. A treaty would ensure that this vital technology is nurtured while simultaneously ensuring that everyday civilians are not unduly harmed. Although the United States’ proposal lacks detail, many others have considered what could form the foundation of a global AI treaty.

What a Treaty Should Look Like

The basis of a global AI treaty already exists.[xiv] Many of the most common principles that would form the foundation of any treaty are accepted by 90% of the global organizations working on AI.[xv] The central problem with a hypothetical agreement is that it lacks definition and detail,[xvi] which opens the door to future abuse or pleas of ignorance.[xvii] Still, it is promising that the principles that would underpin a potential agreement already exist. Alignment is especially critical where AI intersects with the use of force. Concerning weaponry, a treaty should be broad in scope; insist on human control of the AI, with a specific focus on what any developer must and must not do; articulate standards for transparency; set standards for collaboration; and include guidance on implementation.[xviii] It is also promising that the necessary elements of an AI weapons treaty dovetail with more general AI regulation. Consistently, the key tenets of regulating AI are: AI must do no harm, AI is subject to all laws that apply to its human operator, AI must clearly disclose that it is not human, AI must not share confidential information, and AI must not amplify any bias that already exists in current systems.[xix] Distilled, these tenets are no harm, responsibility, transparency, privacy, and bias mitigation.
These pillars support an agreement that would turn the development of AI from a winner-takes-all arms race into a more inclusive and mutually beneficial process.

Counterargument

Alternatively, there are arguments that AI does not need to be, and should not be, regulated: the lack of precedent for regulating science, regulators’ lack of experience, the absence of true danger posed by AI, and the global competition that AI has created. Global competition is potentially the strongest of these arguments. Regulating artificial intelligence will be difficult because of the inability to set and enforce universal standards.[xx] A compelling and pertinent example is China’s reaction to international law.[xxi] In 2016, an international tribunal found that China was violating modern maritime law and would need to reform its activities.[xxii] Once the findings were announced, however, China publicly declared that it would ignore the ruling, stating that “the People’s Republic of China solemnly declares that the award is null and void and has no binding force.”[xxiii] China’s response resulted from its aggressive stance toward the waters off its southeastern coast, a lucrative economic zone.[xxiv] This demonstrates not only that a treaty will be difficult to enforce without a clear threat of force, but also that flouting it can represent massive economic gains.[xxv] Finally, the idea of working only with good actors is unrealistic, and the potential competitive loss represents too large a risk.[xxvi] If a treaty were ratified, and some countries abided by it while others flouted it, those who followed the rules would be at a severe competitive disadvantage.[xxvii] Ultimately, any treaty would have to solve the problem of enforcement, especially considering the massive benefit of ignoring regulations. Artificial intelligence should be regulated, but the idealistic hope of treaty establishment and diplomatic good nature must be tempered with a plan to mitigate bad actors.
Conclusion

The speed of Artificial Intelligence proliferation demands a system to ensure the safe development and application of the technology.[xxviii] Fortunately, many of the foundational pillars of such a treaty are already agreed upon.[xxix] Though there is general agreement on the broad strokes, any such policy still lacks detail.[xxx] The potentially massive opportunity in the Artificial Intelligence market encourages a winner-takes-all view that misunderstands both the technology’s current stage in the development life cycle and its long-term prospects.[xxxi] A focus on collaboration will foster the greatest advancement and the largest potential market to share.[xxxii] Embracing at least the collaboration tenet of that view, the United States has called for the creation of an Artificial Intelligence treaty to ensure that the technology is used properly.[xxxiii] Though a regulatory structure appeals to many, some believe that Artificial Intelligence regulation is misguided and unnecessary.[xxxiv] Ultimately, that view is flawed and ignores the recent precedent of self-interested actors, even in smaller economic markets.[xxxv] Developing and implementing a treaty is of paramount importance to the continued successful application of Artificial Intelligence research and technology.

[i] Paul Mozur, Beijing Wants A.I. to Be Made in China by 2030, N.Y. Times (July 20, 2017) (last visited Oct. 21, 2020).
[ii] Anton Shilov, Nvidia Will Build the “World’s Fastest AI Supercomputer”, TechRadar (Oct. 15, 2020) (last visited Oct. 21, 2020).
[iii] Gary Sims, The $59 Jetson Nano 2GB Is Proof Nvidia Is Serious About AI for Everyone, Android Authority (Oct. 5, 2020) (last visited Oct. 21, 2020).
[iv] Mozur, supra note 1.
[v] The Need for and Elements of a New Treaty on Fully Autonomous Weapons, Human Rights Watch (2020) (last visited Oct. 21, 2020).
[vi] House Armed Services Committee, Future of Defense Task Force Report 2020 (last visited Oct. 21, 2020) [hereinafter Future of Defense Task Force Report 2020].
[vii] Mozur, supra note 1.
[viii] Tim Hwang & Alex Pascal, Artificial Intelligence Isn’t an Arms Race, Foreign Policy (Dec. 11, 2019) (last visited Oct. 22, 2020).
[ix] Id.
[x] Id.
[xi] Id.
[xii] Future of Defense Task Force Report 2020, supra note 6.
[xiii] Id.
[xiv] Oren Etzioni & Nicole Decario, We Have the Basis for an International AI Treaty, The Hill (July 17, 2019) (last visited Oct. 21, 2020).
[xv] Id.
[xvi] Id.
[xvii] Id.
[xviii] The Need for and Elements of a New Treaty on Fully Autonomous Weapons, supra note 5.
[xix] Etzioni & Decario, supra note 14.
[xx] Cameron Russell, A Case for Not Regulating the Development of Artificial Intelligence, Medium (Apr. 1, 2019) (last visited Oct. 24, 2020).
[xxi] Steve Mollman & Heather Timmons, China Has No Respect for International Law, Its Neighbors, or Marine Life, a Tribunal Rules, Quartz (June 12, 2016) (last visited Oct. 24, 2020).
[xxii] Id.
[xxiii] Id.
[xxiv] Id.
[xxv] Artificial Intelligence (AI) Market Size, Growth, Trends and Global Segments Analysis Report, MarketWatch (Sept. 9, 2020) (last visited Oct. 24, 2020).
[xxvi] Mollman & Timmons, supra note 21.
[xxvii] Id.
[xxviii] Etzioni & Decario, supra note 14.
[xxix] Id.
[xxx] Id.
[xxxi] Hwang & Pascal, supra note 8.
[xxxii] Id.
[xxxiii] Future of Defense Task Force Report 2020, supra note 6.
[xxxiv] Russell, supra note 20.
[xxxv] Mollman & Timmons, supra note 21.

The views expressed in this post represent the views of the post’s author only.