Washington International Law Journal

Abstract

As debates on potential societal harm from artificial intelligence (AI) culminate in legislation and international norms, a global divide is emerging in both AI regulatory frameworks and international governance structures. In terms of domestic regulatory frameworks, the European Union (E.U.), Canada, and Brazil follow a “horizontal” or “lateral” approach that postulates the homogeneity of AI, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the United States (U.S.), the United Kingdom (U.K.), Israel, and Switzerland (and potentially China) have pursued a “context-specific” or “modular” approach, tailoring regulations to the specific use cases of AI systems. In terms of international governance structures, the United Nations is exploring a centralized AI governance framework to be overseen by a supranational body comparable to the International Atomic Energy Agency. However, the U.K. is spearheading, and the U.S. and several other countries have endorsed, a decentralized governance model, in which AI safety institutes in each jurisdiction evaluate the safety of high-performance general-purpose models pursuant to interoperable standards. This paper argues for a context-specific approach alongside decentralized governance to effectively address evolving risks in diverse mission-critical domains while avoiding the social costs associated with one-size-fits-all approaches. However, to enhance the systematicity and interoperability of international norms and accelerate global harmonization, this paper proposes an alternative contextual, coherent, and commensurable (3C) framework. To ensure contextuality, the framework (i) bifurcates the AI life cycle into two phases, learning and deployment for specific tasks, instead of defining foundation or general-purpose models; and (ii) categorizes these tasks based on their application and interaction with humans as follows: autonomous, discriminative (allocative, punitive, and cognitive), and generative AI. To ensure coherency, each category is assigned specific regulatory objectives, replacing 2010s-vintage “AI ethics.” To ensure commensurability, the framework promotes the adoption of international standards for measuring and mitigating risks.
