A Technological Threat to Human Rights: The Case for United Nations Artificial Intelligence Regulation
Vol. 43 Associate Editor
The lack of regulation concerning artificial intelligence (“AI”) poses a risk to international peace and security, requiring the United Nations (“U.N.”) to step in and provide a framework for oversight. The European Union’s (“EU”) recently proposed regulation provides a structure for this oversight and should be adopted on an international scale to protect fundamental human rights. The U.N. should prioritize regulating its own suborganizations’ use of AI and should further explore passing a binding resolution or multilateral treaty to regulate state AI use.

1. Human Rights Problems Posed by AI

U.N. High Commissioner for Human Rights Michelle Bachelet publicly acknowledged the threat AI poses to human rights this past September, recognizing that, while it can be a force for good, “AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.” The statement coincided with the publication of a report by the Office of the U.N. High Commissioner for Human Rights analyzing AI’s effect on human rights. Findings of failures of due diligence, discriminatory data, increasing use of largely unregulated biometric technologies, and a lack of transparency led Bachelet to call for a complete moratorium on the sale and use of AI systems until adequate regulatory systems have been put into place.

The U.N. itself increasingly uses AI technologies and maintains several research and development labs focused on developing AI solutions that further the organization’s mission, such as improving its ability to anticipate and respond to humanitarian crises. These uses have themselves sparked questions about AI’s threat to human rights, as seen in U.N. agencies’ use of biometric identification in managing humanitarian logistics and refugee claims.
For example, in 2019, the United Nations World Food Programme (“WFP”) paused its delivery of food in Yemen’s capital after Houthi rebels did not allow for the registration of recipients’ details. The WFP required biometric registration to prevent the diversion of aid by the Houthi-aligned authorities; the Houthis, however, argued that the biometrics program posed a threat to national security. While aid agencies argued that the system could make picking up assistance easier and more efficient, privacy advocates expressed concern over the potential danger the information posed to vulnerable people.

The current lack of regulation means a gap in formal oversight, with enforcement today consisting largely of peer recommendations and “best practices.” Stronger legal backing is necessary to regulate international agencies like the U.N. itself as it enters into a partnership with Palantir, a CIA-linked American software firm that has been criticized by human rights watchdogs for potential human rights violations, to analyze the WFP’s data. Some see aid agencies as naïve in entering such partnerships without a full understanding of their implications. The worry with programs like these centers on who holds the power to decide how data is shared, with whom it is shared, and how it is used. Those in vulnerable positions, like aid recipients, may not understand the consequences of providing (or refusing to provide) the data and have little control over what is done with it. Such was the case with the European Commission’s database of asylum seekers, which was initially protected but later opened to Europol and other law enforcement agencies.
Concerns have also been raised over Ghana’s electoral commission selling biometric data to a software developer, who in turn sold the data to financial service providers, and over China’s rollout of facial recognition technology in the region of Xinjiang, home to many of the country’s minority Uyghurs.

2. The EU as a Model for Global Regulation

The U.N. report and its troubling findings come as the EU became the first governmental body to draft a comprehensive response to the development and use of AI earlier this year. The proposal outlines regulations that divide AI systems into four categories of risk, each with its own requirements. The regulations apply to individuals and companies located within the EU, placing AI systems on the market in the EU, or using an AI system within the EU. The proposal could enter into force in 2022, beginning a transitional period, with full enforcement expected in the second half of 2024 at the earliest.

The proposal establishes a list of prohibited practices that present a clear threat to fundamental human rights and safety and are therefore considered unacceptable. These include systems that manipulate the public in ways that could cause physical or psychological harm, systems that exploit specific vulnerable groups (such as children or people with disabilities), and systems that use real-time biometric identification in public places for law enforcement purposes. These unacceptable-risk systems would not be permitted in the EU.
High-risk AI systems are subject to oversight, transparency, monitoring, reporting, cybersecurity, risk management, and data quality obligations. These include safety components of products; essential private and public services; migration, asylum, and border control management; and other systems that could put the life and health of citizens at risk. All remote biometric identification systems are considered high risk. Limited-risk AI systems, such as chatbots, are subject to transparency obligations under the proposal, while minimal-risk systems (AI-enabled video games, spam filters) are allowed free use. The regulations impose penalties that could include fines of up to €30 million or six percent of global revenue.

The proposal would expand regulations introduced in the EU General Data Protection Regulation (“GDPR”), which imposes rules on profiling and automated decision-making that mandate transparency, fairness, and the right to challenge an automated decision. In contrast to the GDPR, the proposed regulation expands the extraterritorial scope of those subject to its requirements, outright prohibits certain AI systems, and imposes more specific requirements on high-risk AI systems. For example, while the GDPR requires fairness in the processing of personal data and grants individuals the right to human intervention by the company making an automated decision, the proposed regulation imposes specific requirements, including training, validation, and testing of data to avoid bias and discrimination, that apply not just to the company but also to the vendor providing the system.

The European Commission’s proposal is groundbreaking and paves the way for the kind of regulation for which Bachelet is advocating. It also highlights the lack of international regulation of a fast-moving industry that poses a distinct threat to human rights. The U.N. should issue a resolution bringing this model of regulation to the international stage.

3. Proposed Regulation

At the very least, the U.N. must regulate its own bodies. Programs like the one used by the WFP pose a threat to human rights without proper oversight. A United Nations Regulation for Artificial Intelligence modeled after the EU’s proposed legislation would ensure the continued development of AI that furthers the organization’s mission while allowing for supervision and scrutiny of the kinds of technologies deemed “high-risk.” Adopting the EU proposal on an international scale would allow for formal oversight of controversial AI systems such as the biometric systems put into use by the WFP, which would likely fall under the high-risk category and its obligations of transparency, cybersecurity, and risk management. Such regulation should be prioritized in line with Bachelet’s recommendation and should cover all U.N. suborganizations.

Further, the U.N. should explore the possibility of regulating state use of AI, whether through resolution or multilateral treaty. It can be argued that this issue falls within the United Nations Security Council’s (“UNSC”) domain and could therefore be regulated under a binding resolution passed by the UNSC, going beyond the peer recommendations already in place and carrying real weight in the international sphere. AI poses a clear threat to international security and demands a proportionate U.N. response. A multilateral treaty could also be an attractive option, with the added benefit of states opting in rather than the regulation coming from the top down. The AI threat to human rights aligns with other human rights-focused multilateral treaties issued under the U.N., presenting a united front against an increasingly critical concern. This approach might, however, present challenges in convincing states that prioritize AI development to submit to third-party regulation and potential penalties.
The human rights issues posed by artificial intelligence are too great to be ignored. The United Nations must move quickly to address these threats before they materialize in more concrete and harmful ways. Binding regulation is necessary to create the formalized oversight needed to protect the public, encouraging innovation and growth while still prioritizing fundamental human rights.
1. United Nations, Urgent Action Needed over Artificial Intelligence Risks to Human Rights, U.N. News (Sept. 15, 2021), https://news.un.org/en/story/2021/09/1099972.
2. See U.N. High Comm’r for Hum. Rts., The Right to Privacy in the Digital Age, U.N. Doc. A/HRC/48/31 (Sept. 13, 2021).
3. United Nations, supra note 1.
4. Eleonore Fournier-Tombs, The United Nations Needs to Start Regulating the ‘Wild West’ of Artificial Intelligence, The Conversation (May 31, 2021, 12:41 PM), https://theconversation.com/the-united-nations-needs-to-start-regulating-the-wild-west-of-artificial-intelligence-161257.
5. Id.; Ben Parker, New UN Deal with Data Mining Firm Palantir Raises Protection Concerns, The New Humanitarian (Feb. 5, 2019), https://www.thenewhumanitarian.org/news/2019/02/05/un-palantir-deal-data-mining-protection-concerns-wfp.
6. Head to Head: Biometrics and Aid, The New Humanitarian (July 17, 2019), https://www.thenewhumanitarian.org/opinion/2019/07/17/head-head-biometrics-and-aid.
7. Id.; Jamey Keaten & Matt O’Brien, UN Urges Moratorium on Use of AI that Imperils Human Rights, AP News (Sept. 15, 2021), https://apnews.com/article/technology-business-laws-united-nations-artificial-intelligence-efafd7b1a5bf47afb1376e198842e69d.
8. Misha Benjamin, Kevin Buehler, Rachel Dooley & Peter Zipparo, What the Draft European AI Regulations Mean for Business, McKinsey & Company (Aug. 10, 2021), https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/what-the-draft-european-union-ai-regulations-mean-for-business.
9. European Commission, Regulatory Framework Proposal on Artificial Intelligence, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (last modified Oct. 14, 2021).
10. Commission Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (Apr. 21, 2021).
11. European Commission, supra note 9.
12. Benjamin, supra note 8.
13. Marijn Storm & Alex van der Wolk, Privacy and the EU’s Regulation on AI: What’s New and What’s Not?, Morrison Foerster (Apr. 22, 2021), https://www.mofo.com/resources/insights/210422-privacy-eu-regulation-ai.html.
14. Eleonore Fournier-Tombs, Towards a United Nations Internal Regulation for Artificial Intelligence, 8 Big Data & Society (Aug. 30, 2021), https://doi.org/10.1177%2F20539517211039493.