The International Community’s Need for Human Oversight in Artificial Intelligence

Introduction

Artificial intelligence systems (“AI”) use machine learning, machine reasoning, and robotics to perceive their environment through data acquisition and derive the best action to achieve a given goal.[1] Human oversight of AI aims to prevent the unintentional harms AI can produce, such as those arising from mistaken or biased predictions.[2]

The international community must create a treaty establishing a system of human oversight of AI, following the lead of the European Union and the United States. Although human oversight of AI is a vast topic, this blog is limited to four questions: which AI systems require human oversight, when oversight is appropriate, who should be responsible for it, and what its scope should be.

Measures in the European Union and the United States

The European Union and the United States have already taken measures to ensure there is human oversight of AI. In the European Union, this is apparent in the proposed AI Act (“AIA”), the Ethics Guidelines for Trustworthy AI (“Guidelines”), and the Assessment List for Trustworthy AI (“ALTAI”).[3] In the United States, the commitment to human oversight of AI is apparent in the creation of the Chief Digital and AI Office (“CDAO”), the AI Bill of Rights blueprint (“Blueprint”), and the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“Executive Order”).[4]

The European Commission proposed the AIA,[5] and most indications signal that a Regulation will be adopted, including a requirement for human oversight.[6] As drafted, Article 14 of the AIA requires that AI systems be designed and developed so that they can be “effectively overseen by natural persons.”[7]

Further, the Guidelines are a document prepared by the High-Level Expert Group on AI (“HLEG”).[8] The document highlights seven non-exhaustive, key requirements for AI to be trustworthy, including human agency and oversight.[9] The Guidelines explain that human oversight can be achieved through governance mechanisms in three ways: human-in-the-loop (“HITL”), human-on-the-loop (“HOTL”), or human-in-command (“HIC”).[10] Depending on which approach is adopted, when oversight is required and how far it extends will vary.[11] Finally, the ALTAI operationalizes the Guidelines as a self-assessment tool.[12]

In the United States, the CDAO is designed to ensure the execution of the policy requiring human oversight of AI.[13] The Biden Administration released the Blueprint, which includes “principles that should guide the design, use, and deployment of automated systems.”[14] Most recently, President Biden issued an executive order that, among other things, establishes new standards for AI safety and security.[15]

What the Requirement for Human Oversight of AI May Look Like in an International Treaty

Which AI Systems Require Human Oversight?

Not all AI should be subject to human oversight. Rather, the requirement ought to depend on the sector in which the system operates. Although the European Union and the United States identify different sectors, there is consensus on employment,[16] education,[17] and law enforcement/criminal justice,[18] and these three sectors should therefore be included in a treaty. When an AI system falls within one of them, human oversight ought to be required.

The AIA proposes to require human oversight for (a) AI systems that are products, or parts of products, covered by the EU legislation listed in Annex II, and (b) AI systems considered “high-risk” because they fall within a specific sector listed in Annex III.[19]

The United States, on the other hand, places much emphasis on the healthcare sector. Both the Executive Order and the Blueprint propose requiring human oversight of AI used in healthcare.[20] The Blueprint, however, goes further and proposes extensive human oversight in “sensitive domains” that require extra protections, such as the criminal justice system, employment, education, and healthcare, among others.[21]

When Should Human Oversight Be Deployed in AI?

The European Union does not provide a clear answer as to when oversight is required but suggests that it ought to extend throughout an AI system’s entire operation. The AIA requires that high-risk AI be designed and developed so that it can be effectively overseen by natural persons during the period the system is in use.[22] Similarly, the Guidelines call for all seven requirements, including human oversight, to be evaluated and addressed continuously throughout the AI system’s life cycle.[23] Specifically, HITL systems require oversight in every decision cycle, HOTL systems require oversight during the design cycle and during operation, and HIC systems require oversight of the overall activity.[24]

The United States’s Executive Order requires post-market oversight of AI-enabled healthcare-technology algorithmic systems.[25] Although the United States does not specify when post-market oversight is to be conducted, and there is thus no clear answer as to when oversight should be required, requiring human oversight at all stages of an AI system’s operation is the safer course and avoids the greater risks that come with no oversight at all.

Who Should Be Overseeing AI?

Although no strict guidelines specify who should take on the responsibility of overseeing AI, there is consensus that those who do must undergo appropriate training, including training on the risk of automation bias.

As an overview, all of the referenced European Union documents and the United States’s Blueprint emphasize the importance of training. Specifically, Recital 48 of the AIA provides that overseers must have the necessary competence, training, and authority,[26] the ALTAI requires “specific training,”[27] the Guidelines require overseers to be “properly qualified and trained,”[28] and the Blueprint requires training and assessment for any human-based portions of the system.[29]

The European Union’s AIA most clearly lays out the proposed requirements for those tasked with overseeing AI in Article 14(4).[30] Given that clarity, an international treaty should be modeled on Article 14(4). Because both the AIA and the Guidelines stress that overseers must be specifically made aware of, and remain alert to, automation bias, a treaty should include that requirement as well.[31]

The Scope of Human Oversight

Finally, an international treaty should require human oversight to be specific to the sector and context in which the AI system operates.

The Guidelines acknowledge that the oversight required may depend on the AI system’s application area, the potential risk posed,[32] what the system is being used for, and the existing security, safety, and control measures in place.[33] However, neither the Guidelines nor the AIA delves into significant detail regarding how human oversight is to be performed.[34] Seeing as AI that requires human oversight falls into various sectors, and thus various contexts and areas of society and the law,[35] it makes sense that these specifics are not provided. Rather, proportionality governs: all relevant criteria ought to apply “as appropriate for the circumstances.”[36] Generally, this suggests an inverse relationship between human oversight and other safeguarding mechanisms, where “all other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.”[37]

Correspondingly, the United States’s Blueprint requires that human oversight be justified by, and tailored to, the specific use case and real-world deployment scenario.[38]

Conclusion

A treaty establishing a system of human oversight of AI ought to cover only AI that falls within the employment, education, and law enforcement/criminal justice sectors and ought to require oversight specific to the sector and context in which the system operates. Overseers ought to be properly trained, including on the risks of automation bias, and ought to oversee the AI throughout its entire operation.

  1. Independent High-Level Expert Group on Artificial Intelligence Set Up by the European Commission, A Definition of AI: Main Capabilities and Disciplines, at 6 (Apr. 8, 2019).
  2. Independent High-Level Expert Group on Artificial Intelligence Set Up by the European Commission, Ethics Guidelines for Trustworthy AI, at 5 (Apr. 8, 2019) [hereinafter Ethics Guidelines for Trustworthy AI]; Ben Green, The Flaws of Policies Requiring Human Oversight of Government Algorithms, 45 Comput. L. & Sec. Rev. 1, 2–5 (2022).
  3. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, at 30 ¶ 48, COM (2021) 206 final (Apr. 21, 2021) [hereinafter Proposal for AIA]; Ethics Guidelines for Trustworthy AI, supra note 2, at 16; Independent High-Level Expert Group on Artificial Intelligence Set Up by the European Commission, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment, at 8 (July 17, 2020) [hereinafter ALTAI].
  4. CBS News Sunday Morning with Jane Pauley, AI and the Military, Crimes Against Wildlife, Marty Baron, CBS News (Oct. 1, 2023), https://open.spotify.com/episode/4saI0GRXCS8j9DJ2AC5NTF?si=699b2fe3f6904371; Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, The White House, at 47 (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf [hereinafter Blueprint for an AI Bill of Rights]; Exec. Order No. 14110, 88 Fed. Reg. 75191 (Oct. 30, 2023).
  5. Proposal for AIA, supra note 3.
  6. Lena Enqvist, ‘Human Oversight’ in the EU Artificial Intelligence Act: What, When and by Whom?, 15 L. Innovation & Tech. 508, 509 (2023).
  7. Johann Laux, Institutionalised Distrust and Human Oversight of Artificial Intelligence: Toward a Democratic Design of AI Governance under the European Union AI Act 3 (Aug. 2023) (revised working paper) (Oxford Internet Institute).
  8. Ethics Guidelines for Trustworthy AI, supra note 2.
  9. The other six requirements include technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental wellbeing; and accountability. Id. at 14–16.
  10. Id. at 16.
  11. Id.
  12. ALTAI, supra note 3.
  13. CBS News Sunday Morning with Jane Pauley, supra note 4.
  14. Philip Alexander, Reconciling Automated Weapon Systems with Algorithmic Accountability: An International Proposal for AI Governance, Harvard Int’l L. J. (Oct. 16, 2023), https://journals.law.harvard.edu/ilj/2023/10/reconciling-automated-weapon-systems-with-algorithmic-accountability-an-international-proposal-for-ai-governance.
  15. Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, The White House, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence (last visited Nov. 3, 2023).
  16. See, for example, Annexes to the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, at 4–5, COM (2021) 206 final (Apr. 21, 2021) [hereinafter Annex III to Proposal for AIA], providing AI systems in employment contexts that would require human oversight, such as AI designed to recruit potential hires, screen applications, evaluate candidates, make decisions on promotions and terminations, and evaluate performance.
  17. See, for example, id., providing AI systems in education contexts that would require human oversight, such as AI designed to assign people to educational training institutions and assess participants in tests required for admission to educational institutions.
  18. See, for example, id.; Blueprint for an AI Bill of Rights, supra note 4, at 47, providing AI systems in law enforcement/criminal justice contexts that would require human oversight, such as AI that is designed to conduct pre-trial risk assessments, make parole decisions, and evaluate the reliability of evidence.
  19. Enqvist, supra note 6, at 516.
  20. Exec. Order No. 14110, supra note 4, at 75214; Blueprint for an AI Bill of Rights, supra note 4, at 47.
  21. Blueprint for an AI Bill of Rights, supra note 4, at 47.
  22. Enqvist, supra note 6, at 517.
  23. Ethics Guidelines for Trustworthy AI, supra note 2, at 8.
  24. Id.
  25. Exec. Order No. 14110, supra note 4, at 75215.
  26. Enqvist, supra note 6, at 528–29.
  27. ALTAI, supra note 3, at 8.
  28. Luke Scanlon, What Meaningful Human Oversight of AI Should Look Like, Pinsent Masons (Apr. 28, 2022, 1:35 PM), https://www.pinsentmasons.com/out-law/analysis/what-meaningful-human-oversight-of-ai-should-look-like.
  29. Blueprint for an AI Bill of Rights, supra note 4, at 49.
  30. Scanlon, supra note 28. See Proposal for AIA, supra note 3, at 51, laying out the proposed requirements for those tasked with overseeing AI in Article 14(4):

    (a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;

    (b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI used to provide information or recommendations for decisions to be taken by natural persons;

    (c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;

    (d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;

    (e) be able to intervene in the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.

  31. Scanlon, supra note 28.
  32. Ethics Guidelines for Trustworthy AI, supra note 2, at 16.
  33. Scanlon, supra note 28.
  34. Enqvist, supra note 6, at 513, 525.
  35. Id. at 525.
  36. Id. at 520.
  37. Ethics Guidelines for Trustworthy AI, supra note 2, at 16.
  38. Blueprint for an AI Bill of Rights, supra note 4, at 51.