The unprecedented threat from the novel coronavirus has confined many Americans to their homes, distancing them from one another at great cost to local economies and personal well-being. Meanwhile the pressure grows on American institutions to do something—anything—about the pandemic.
Encouraged by the White House, much of that pressure to act has focused on Silicon Valley and the tech industry, which has responded with fragile digital solutions. Tech companies and engineering departments at major universities are pinning their hopes of returning Americans to work and play on the promise of smartphone apps. Coronavirus? There’s an app for that.
We are concerned by this rising enthusiasm for automated technology as a centerpiece of infection control. Between us, we hold extensive expertise in technology, law and policy, and epidemiology. We have serious doubts that voluntary, anonymous contact tracing through smartphone apps—as Apple, Google, and faculty at a number of academic institutions all propose—can free Americans of the terrible choice between staying home or risking exposure. We worry that contact-tracing apps will serve as vehicles for abuse and disinformation, while providing a false sense of security to justify reopening local and national economies well before it is safe to do so. Our recommendations are aimed at reducing the harm of a technological intervention that seems increasingly inevitable.
We have no doubt that the developers of contact-tracing apps and related technologies are well-intentioned. But we urge the developers of these systems to step up and acknowledge their limitations before the technology is widely adopted. Health agencies and policymakers should not over-rely on these apps and, regardless, should set clear rules that impose appropriate safeguards against threats to privacy, equity, and liberty.
Proposals to combat coronavirus using smartphones largely focus on facilitating the process of “contact tracing.” Contact tracing involves working backward from infected cases to identify people who may have been exposed to disease, so that they can be tested, isolated, and—when possible—treated. Traditional contact tracing is a labor-intensive process of interviews and detective work. Some countries, such as Singapore, South Korea, and Israel, have enlisted technology, including mobile apps, to facilitate contact tracing of coronavirus cases, and the idea is now catching on in the United States. North Dakota and Utah have released voluntary contact-tracing apps that rely on tracking users’ locations as they move about, and the consulting firm PwC has begun promoting a contact-tracing tool that would permit employers to screen which employees can return to work. Several American technology companies and institutions of higher learning are developing the infrastructure that would permit automated contact tracing of a sort, while avoiding certain privacy concerns.
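What would this kind of automated contact tracing look like in practice? The sketch below captures the general shape of the decentralized, Bluetooth-based designs on offer: phones broadcast short-lived random tokens, log the tokens they overhear from nearby devices, and later check those logs against tokens published for confirmed cases. The `Phone` class, token size, and matching function here are illustrative assumptions of ours, not the actual protocol proposed by Apple and Google, which derives rotating identifiers cryptographically from per-day keys.

```python
import os

class Phone:
    """A device in a simplified, hypothetical proximity-tracing scheme."""

    def __init__(self) -> None:
        self.sent: list[bytes] = []     # tokens this phone has broadcast
        self.heard: set[bytes] = set()  # tokens overheard from nearby phones

    def broadcast(self) -> bytes:
        # Real proposals derive identifiers from rotating daily keys;
        # for illustration we simply draw a fresh random 16-byte token.
        token = os.urandom(16)
        self.sent.append(token)
        return token

    def hear(self, token: bytes) -> None:
        self.heard.add(token)

def exposed(phone: Phone, positive_tokens: set) -> bool:
    # Matching happens on-device: the phone compares its local log of
    # overheard tokens against tokens published for confirmed cases.
    return bool(phone.heard & positive_tokens)

# Two phones exchange tokens while within Bluetooth range of each other.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())

# If Alice later tests positive, her tokens are published, and Bob's
# phone detects the contact without his location or contact log ever
# leaving the device.
print(exposed(bob, set(alice.sent)))  # True
```

The privacy appeal of this design is that contact logs never leave the device; only the tokens of users who test positive are ever published. But whether an overheard token reflects an epidemiologically meaningful exposure is a separate question, and one the protocol itself cannot answer.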
Contact tracing can be an important component of an epidemic response, especially when the prevalence of infection is low. Such efforts are most effective where testing is rapid and widely available and where infections are relatively rare—conditions that are not currently met in much of the United States. Under those conditions, manual contact tracing by trained professionals can help identify candidates for testing and quarantine and thereby help contain the spread of the coronavirus.
The lure of automating the painstaking process of contact tracing is apparent. But to date, no one has demonstrated that it’s possible to do so reliably, despite numerous concurrent attempts. Apps that notify participants of possible exposure could, on the margins and in the right conditions, help direct testing resources to those at higher risk. Anything else strikes us as implausible at best and dangerous at worst.
Lawmakers, for their part, must be proactive and rapidly impose safeguards for data privacy, while protecting the communities that can be—and historically have been—harmed by the collection and exploitation of personal data. Protections need to expressly prohibit economic and social discrimination on the basis of information gathered by, and technology designed to address, the pandemic. For example, academics in the United Kingdom have proposed model legislation that would bar compulsory or coerced use of these untested systems as a condition of returning to work or school or of accessing public resources. The prospect of surveillance during this crisis only serves to reveal how few safeguards for consumer privacy exist, especially at the federal level.
At the end of the day, no clever technology—standing alone—is going to get us out of this unprecedented threat to health and economic stability. At best, the most visible technical solutions will do no more than help at the margins. At a minimum, it is the obligation of their designers to ensure that they do no harm.
Ashkan Soltani is an independent researcher and technologist specializing in privacy, security, and behavioral economics. He was previously a senior advisor to the U.S. Chief Technology Officer, the chief technologist for the Federal Trade Commission, and a contributor to the Washington Post team that in 2014 won a Pulitzer Prize for its coverage of national-security issues.
Ryan Calo is a professor of law at the University of Washington, with courtesy appointments in computer science and information science, and the co-founder of two interdisciplinary research initiatives.
Carl Bergstrom is a professor of biology at the University of Washington with extensive experience in the epidemiology of emerging infectious diseases, which he is integrating into ongoing research on the spread of disinformation through social and traditional media channels during the SARS-CoV-2 pandemic.