The University of Washington Tech Policy Lab is an interdisciplinary research unit that spans the School of Law, the Information School, and the Paul G. Allen School of Computer Science and Engineering and that aims to enhance technology policy through research, education, and thought leadership.
The Lab produces whitepapers and sponsors lectures.
Ryan Calo, Ivan Evtimov, Earlence Fernandes, Tadayoshi Kohno, and David O'Hair
The authors of this essay represent an interdisciplinary team of experts in machine learning, computer security, and law. Our aim is to introduce the law and policy community within and beyond academia to the ways adversarial machine learning (ML) alters the nature of hacking and, with it, the cybersecurity landscape. Using the Computer Fraud and Abuse Act of 1986—the paradigmatic federal anti-hacking law—as a case study, we mean to show the burgeoning disconnect between law and technical practice. And we hope to explain what is at stake should we fail to address the uncertainty that flows from the prospect that hacking now includes tricking.
The essay proceeds as follows. Part I provides an accessible overview of machine learning. Part II explains the basics of adversarial ML for a law and policy audience, laying out the set of techniques used to trick or exploit AI as of this writing. This appears to be the first taxonomy of adversarial ML in the legal literature (though it draws from prior work in computer science).
Part III describes the current anti-hacking paradigm and explores whether it envisions adversarial ML. The question is a close one and the inquiry complex, in part because our statutory case study, the CFAA, is broadly written and has been interpreted expansively by the courts. We apply the CFAA framework to a series of hypotheticals grounded in real events and research and find that the answer is unclear.
Part IV shows why this lack of clarity is cause for concern. First, courts and other authorities will be hard-pressed to draw defensible lines between intuitively wrong and intuitively legitimate conduct. How do we reach acts that endanger safety—such as tricking a driverless car into mischaracterizing its environment—while tolerating reasonable anti-surveillance measures—such as makeup that foils facial recognition—which rely on similar technical principles but carry dissimilar secondary consequences?
Second, and relatedly, researchers interested in testing whether systems being developed are safe and secure do not always know whether their hacking efforts may implicate federal law. Here we join a chorus of calls for the government to clarify the conduct it seeks to reach and restrict while continuing to advocate for an exemption for research aimed at improvement and accountability. Third, designers and distributors of AI-enabled products will not understand the full scope of their obligations with respect to security. We advance a normative claim that the failure to anticipate and address tricking is as irresponsible or “unfair” as inadequate security measures in general.
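The kind of "tricking" at issue can be sketched in a few lines using the fast gradient sign method (FGSM), a well-known technique from the computer-science literature on adversarial examples that the essay draws on. In this sketch the linear classifier, its weights, and the input vector are invented stand-ins for illustration; real attacks apply the same idea to deep networks behind, say, a vehicle's perception system.

```python
import numpy as np

# Hypothetical linear classifier standing in for a learned model:
# it predicts class 1 whenever the score w @ x + b is positive.
w = np.array([1.0, -2.0, 0.5])   # model weights (assumed known to the attacker)
b = 0.1                          # model bias

def predict(x):
    """Return the classifier's label for input x (1 if score > 0, else 0)."""
    return int(w @ x + b > 0)

def fgsm_perturb(x, eps):
    """Nudge each feature of x by at most eps in the direction that flips the label.

    For a linear model the gradient of the score with respect to x is simply w,
    so the attacker steps along -sign(w) when the current label is 1 (to push
    the score down) and along +sign(w) when it is 0 (to push the score up).
    """
    grad = -w if predict(x) == 1 else w
    return x + eps * np.sign(grad)

x = np.array([2.0, 0.5, 0.2])      # benign input, classified as 1
x_adv = fgsm_perturb(x, eps=0.4)   # slightly perturbed input, classified as 0
```

The legally salient point is that `x_adv` differs from `x` by a small, bounded amount in every feature, yet the model's output changes entirely; nothing in the system is "accessed" in the conventional sense.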
Matthew Bellinger, Ryan Calo, Brooks Lindsay, Emily McReynolds, Mackenzie Olson, Gaites Swanson, Boyang Sa, and Feiyang Sun
The advent of automated vehicles (AVs)—also known as driverless or self-driving cars—alters many assumptions about automotive travel. Foremost, of course, is the assumption that a vehicle requires a driver: a human occupant who controls the direction and speed of the vehicle, who is responsible for attentively monitoring the vehicle's environment, and who is liable for most accidents involving the vehicle. By changing these and other fundamentals of transportation, AV technologies present opportunities but also challenges for policymakers across a wide range of legal and policy areas. To address these challenges, federal and state governments are already developing regulations and guidelines for AVs.
Seattle and other municipalities should also prepare for the introduction and adoption of these new technologies. To facilitate preparation for AVs at the municipal level, this whitepaper—the result of research conducted at the University of Washington's interdisciplinary Tech Policy Lab—identifies the major legal and policy issues that Seattle and similar cities will need to consider in light of new AV technologies.
Emily McReynolds, Sarah Hubbard, Timothy Lau, Aditya Saraf, Maya Cakmak, and Franziska Roesner
Hello Barbie, CogniToys Dino, and Amazon Echo are part of a new wave of connected toys and gadgets for the home that listen. Unlike the smartphone, these devices are always on, blending into the background until needed. We conducted interviews with parent-child pairs in which they interacted with Hello Barbie and CogniToys Dino, shedding light on children’s expectations of the toys’ “intelligence” and parents’ privacy concerns and expectations for parental controls. We find that children were often unaware that others might be able to hear what was said to the toy, and that some parents draw connections between the toys and similar tools not intended as toys (e.g., Siri, Alexa) with which their children already interact. Our findings illuminate people’s mental models and experiences with these emerging technologies and will help inform the future designs of interactive, connected toys and gadgets. We conclude with recommendations for designers and policy makers.
Ryan Calo, Tamara Denning, Batya Friedman, Tadayoshi Kohno, Lassana Magassa, Emily McReynolds, Bryce Clayton Newell, and Jesse Woo
The vision for AR dates back at least to the 1960s with the work of Ivan Sutherland. In a way, AR represents a natural evolution of information communication technology. Our phones, cars, and other devices are increasingly reactive to the world around us. But AR also represents a serious departure from the way people have perceived data for most of human history: a Neolithic cave painting or book operates like a laptop insofar as each presents information to the user in a way that is external to her and separate from her present reality. By contrast, AR begins to collapse millennia of distinction between display and environment.
Today, a number of companies are investing heavily in AR and beginning to deploy consumer-facing devices and applications. These systems have the potential to deliver enormous value, including to populations with limited physical or other resources. Applications include hands-free instruction and training, language translation, obstacle avoidance, advertising, gaming, museum tours, and much more.
AR also presents novel or acute challenges for technologists and policymakers, including privacy, distraction, and discrimination.
This whitepaper—which grows out of research conducted across three academic units of the University of Washington's interdisciplinary Tech Policy Lab—identifies some of the major legal and policy issues AR may present as a novel technology and outlines some conditional recommendations to help address those issues. Our key findings include:
1. AR exists in a variety of configurations, but in general, AR is a mobile or embedded technology that senses, processes, and outputs data in real-time, recognizes and tracks real-world objects, and provides contextual information by supplementing or replacing human senses.
2. AR systems will raise legal and policy issues in roughly two categories: collection and display. Issues tend to include privacy, free speech, and intellectual property as well as novel forms of distraction and discrimination.
3. We recommend that policymakers—broadly defined—engage in diverse stakeholder analysis, threat modeling, and risk assessment processes. We recommend that they pay particular attention to the facts that a) adversaries succeed when systems fail to anticipate behaviors, and b) not all stakeholders experience AR the same way.
4. Architectural/design decisions—such as whether AR systems are open or closed, whether data is ephemeral or stored, where data is processed, and so on—will each have policy consequences that vary by stakeholder.
Emily McReynolds, Adam Lerner, Will Scott, Franziska Roesner, and Tadayoshi Kohno
We study legal and policy issues surrounding cryptocurrencies, such as Bitcoin, and how those issues interact with technical design options. With an interdisciplinary team, we consider in depth a variety of issues surrounding law, policy, and cryptocurrencies—such as the physical location where a cryptocurrency's value exists for jurisdictional and other purposes, the regulation of anonymous or pseudonymous currencies, and challenges as virtual currency protocols and laws evolve. We reflect on how different technical directions may interact with the relevant laws and policies, raising key issues for both policy experts and technologists.