Washington Journal of Law, Technology & Arts


Kevin Frazier


This paper explains the need for an international AI research initiative. Lawmakers at the subnational, national, and international levels have prioritized regulation over research, creating an imbalance that neglects the critical role of continuous, informed research in developing laws that keep pace with rapid advances in AI.

The proposed international AI research initiative would serve as a central hub for comprehensive AI risk analysis, modeled on successful precedents such as CERN and the IPCC. CERN exemplifies a collaborative research environment in which member states pool resources, producing significant advances in particle physics. Similarly, the IPCC has successfully consolidated and synthesized global climate research, informing policy decisions on an international scale. Drawing from these models, the initiative aims to provide accurate, timely assessments of AI risks, aiding policymakers worldwide and ensuring that AI development benefits all of humanity, not just technologically advanced nations.

This paper also highlights the dichotomy in AI risk perspectives: near-term concerns, such as algorithmic bias, versus existential threats, such as the empowerment of authoritarian regimes. This division often detracts from a unified approach to funding and researching the full range of potential AI risks. The need for an international body becomes evident because individual nations and private entities tend to pursue regional and domestic agendas, which are insufficient to address the global nature of AI risks.

Surveying various national and subnational efforts, the paper critiques their limited scope and emphasizes the inadequacy of these isolated initiatives in tackling global AI challenges. Instead, it calls for an international approach that can leverage global expertise and resources more effectively, akin to CERN’s resource pooling and the IPCC’s consensus-driven research aggregation.

In summary, the paper argues for a shift from predominantly regulatory efforts to a balanced approach in which informed, well-researched guidelines shape global AI policies. This shift is crucial to developing a regulatory framework that is responsive to the rapid advancements and broad implications of AI technologies. By fostering a robust international research initiative, stakeholders can ensure that AI development is guided by comprehensive risk assessments and ethical considerations, promoting a safer, more equitable technological future.