Description
The authors of this essay represent an interdisciplinary team of experts in machine learning, computer security, and law. Our aim is to introduce the law and policy community within and beyond academia to the ways adversarial machine learning (ML) alters the nature of hacking, and with it the cybersecurity landscape. Using the Computer Fraud and Abuse Act of 1986 (CFAA), the paradigmatic federal anti-hacking law, as a case study, we aim to demonstrate the growing disconnect between law and technical practice. And we hope to explain what is at stake should we fail to address the uncertainty that flows from the prospect that hacking now includes tricking.
The essay proceeds as follows. Part I provides an accessible overview of machine learning. Part II explains the basics of adversarial ML for a law and policy audience, laying out the set of techniques used to trick or exploit AI as of this writing. This appears to be the first taxonomy of adversarial ML in the legal literature (though it draws from prior work in computer science).
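To give a concrete flavor of what "tricking" means technically, the sketch below illustrates the best-known technique in that taxonomy, the fast gradient sign method for crafting adversarial examples. It is a minimal illustration in Python/PyTorch, not code from the essay; the classifier, image, and label are assumed placeholders.

    # Minimal sketch: crafting an adversarial example with the fast gradient
    # sign method (FGSM). All names here (model, image, label) are
    # hypothetical placeholders, not artifacts from the essay.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Return a copy of `image` nudged so the classifier may mislabel it."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)  # how wrong is the model now?
        loss.backward()                              # gradient of loss w.r.t. pixels
        # Step every pixel slightly in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

The resulting perturbation is typically too small for a person to notice: the attacker never writes code to the target system, which is part of why the legal status of such conduct is harder to pin down than conventional intrusion.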
Part III describes the current anti-hacking paradigm and explores whether it envisions adversarial ML. The question is a close one and the inquiry complex, in part because our statutory case study, the CFAA, is broadly written and has been interpreted expansively by the courts. We apply the CFAA framework to a series of hypotheticals grounded in real events and research and find that the answer is unclear.
Part IV shows why this lack of clarity is cause for concern. First, courts and other authorities will be hard-pressed to draw defensible lines between intuitively wrong and intuitively legitimate conduct. How do we reach acts that endanger safety, such as tricking a driverless car into mischaracterizing its environment, while tolerating reasonable anti-surveillance measures, such as makeup that foils facial recognition, when both rely on similar technical principles but carry dissimilar secondary consequences?
Second, and relatedly, researchers who test whether systems under development are safe and secure cannot always tell whether their hacking efforts implicate federal law. Here we join a chorus of calls for the government to clarify the conduct it seeks to reach and restrict, while continuing to advocate for an exemption for research aimed at improvement and accountability. Third, designers and distributors of AI-enabled products will not understand the full scope of their security obligations. We advance a normative claim: the failure to anticipate and address tricking is as irresponsible, or "unfair," as inadequate security measures in general.
Publication Date
2018
Publisher
University of Washington Tech Policy Lab
City
Seattle
Keywords
artificial intelligence, hacking, machine learning
Disciplines
Computer Law | Science and Technology Law
Recommended Citation
Ryan Calo, Ivan Evtimov, Earlence Fernandes, Tadayoshi Kohno & David O'Hair,
Is Tricking a Robot Hacking?
(Univ. of Washington Tech Policy Lab 2018).
Available at:
https://digitalcommons.law.uw.edu/techlab/5