As technology continues to evolve, interactions between humans and artificial intelligence (“AI”) will skyrocket. It is important to understand the impact AI can have on society and the potential harm and subsequent liability that could result, and to develop best practices designed to address them. The U.S. needs a comprehensive framework to govern the design, creation, use, and risks associated with AI; at the time of this writing, no such framework has been implemented. This article takes a socio-legal, interdisciplinary approach to explore socio-ethical concerns and theories of liability related to AI, and applies a sociological perspective to assess the legal frameworks that currently govern human-AI interaction. By adopting an interdisciplinary approach, this article seeks to encourage holistic and robust dialogue about how AI could be developed and operated, in the hope that humans and AI can coexist harmoniously. It also proposes a framework to regulate such development in the U.S.

This article has a few limitations. First, given the accelerated pace of technological change, the future state of AI will differ from the current state, so the framework proposed here might eventually become obsolete. Second, this article is derived from secondary sources; although the information collected includes rich empirical data, no primary data was generated other than the authors’ views. Third, only specific aspects of AI were selected for analysis; other factors in policy, sociology, and law are not addressed. Lastly, this article focuses primarily on Western cultures, North America and Europe in particular, and hence might not be applicable globally.
Michael Callier & Harly Callier,
Blame It on the Machine: A Socio-Legal Analysis of Liability in an AI World,
14 Wash. J. L. Tech. & Arts
Available at: https://digitalcommons.law.uw.edu/wjlta/vol14/iss1/4