
Wanting It Bad Enough Won't Make It Work: Why Adding Backdoors and Weakening Encryption Threatens the Internet

12/16/2015 02:35 pm ET | Updated Dec 16, 2015

By Meredith Whittaker and Ben Laurie
Co-founders of Simply Secure, an organization that focuses, among many other things, on improving the design of secure technologies.

In recent months we've heard calls to ban strong encryption, calls to force tech companies to offer law enforcement the means to decrypt all content and communications, and proposals to mandate that communications systems include backdoors for law enforcement. These are manifestations of the desire that "those in authority" be given the ability to see private content and communications across all networked technologies--what's referred to as "exceptional access." (In using the term "exceptional access" we take our lead from the authors of this year's highly recommended "Keys Under Doormats" paper, which in turn followed the lead of the "1996 US National Academy of Sciences CRISIS report in using the phrase 'exceptional access' to mean that 'the situation is not one that was included within the intended bounds of the original transaction.'" Seriously, if you're interested in this stuff, it's definitive, and very clearly and accessibly written.)

We are not here to debate whether such access is useful from a policy perspective, i.e. whether it would work to stop bad guys. That is a critical discussion that raises many worthy questions, but we leave it to others. We are here to review the technical realities of networked systems, and to explore the impact and potential danger of such proposals from this perspective.

To put it bluntly: the call to provide law enforcement (or, anyone) exceptional access to communications and content poses a grave threat to the future of the Internet. It is simply not possible to give the good guys the access they want without letting the bad guys in. There's nothing new or novel in this statement. Experts have been saying the same thing for 20 years. While the message is old, with the integration of Internet technologies into nearly all aspects of life, the stakes are higher than they've ever been.

Why does providing such access pose a threat? To understand, let's look at the basics of how security is implemented online. Secure systems--that is, any system aiming to allow access to some and not to others (which is most networked systems)--rely on "keys." Keys in this context are cryptographic "passwords" used to validate the identity of a system's users (it's really me!), and to ensure that the right people gain access to the right things, and the wrong people don't. This (among other things) is essential to enable me to log into my email, but not to yours; for you to see your documents, but not mine; and to keep criminals and ex-boyfriends from seeing either. It's this that makes online banking and commerce possible, and it's this that allowed the Internet to become an unprecedented driver of economic and social progress.
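As a loose illustration of the idea--our own sketch, not any particular system's implementation--key-based access control boils down to this: the server checks a cryptographic proof tied to a user's secret key, and grants access only when the proof checks out.

```python
import hmac
import hashlib
import secrets

# Hypothetical server-side key store: each user holds a secret key.
KEYS = {"alice": secrets.token_bytes(32), "bob": secrets.token_bytes(32)}

def sign(user: str, message: bytes) -> bytes:
    """A client proves identity by authenticating the request with their key."""
    return hmac.new(KEYS[user], message, hashlib.sha256).digest()

def verify(user: str, message: bytes, tag: bytes) -> bool:
    """The server recomputes the tag and grants access only on a match."""
    expected = hmac.new(KEYS[user], message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Alice's key opens Alice's inbox...
tag = sign("alice", b"read alice's inbox")
assert verify("alice", b"read alice's inbox", tag)
# ...but the same proof does not pass as Bob.
assert not verify("bob", b"read alice's inbox", tag)
```

Real systems layer far more on top of this (public-key cryptography, certificates, session protocols), but the core contract is the same: possession of the right key is what the machine actually checks.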

The engineers who build and maintain secure systems work tirelessly to ensure that keys and complementary security measures are implemented correctly, letting only the right people gain access. (Implementation failures, not cryptographic weaknesses, are the cause of most security vulnerabilities and breaches.) Important in the context of this conversation is that implementation errors and the resulting security vulnerabilities are much more likely in complex systems. Adding exceptional access for law enforcement is an addition of complexity par excellence.

Further, implementing exceptional access doesn't just add a generalized kind of complexity, which could be bad enough: it adds an additional "doorway" through which those not initially intended can gain access.

Machines don't know a bad guy from a good guy. Machines respond as they've been programmed to respond. Programming them to give up information to third parties cannot be guaranteed to limit access to only those intended: it limits access to anyone who is able to make a request for access in a way that the machines respond to. Vulnerabilities occur either because someone with privileged access illegally gives access to people who shouldn't have it, or because someone who shouldn't have access hacks their way into gaining privileged access.
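To make the point concrete, here is a deliberately simplified sketch--again our illustration, not a real protocol--of what an exceptional-access "doorway" looks like from the machine's side. The machine checks keys, not intentions:

```python
import hmac
import hashlib
import secrets

# Hypothetical setup: each user has a key, plus one exceptional-access
# "master" key intended only for law enforcement.
USER_KEYS = {"alice": secrets.token_bytes(32)}
MASTER_KEY = secrets.token_bytes(32)

def authorized(user: str, message: bytes, tag: bytes) -> bool:
    """Grant access to any holder of a valid key -- the machine has no
    way to tell a warranted request from a stolen-key request."""
    for key in (USER_KEYS.get(user, b""), MASTER_KEY):
        expected = hmac.new(key, message, hashlib.sha256).digest() if key else b""
        if expected and hmac.compare_digest(expected, tag):
            return True
    return False

# The intended "good guy" holding the master key can read Alice's data...
tag = hmac.new(MASTER_KEY, b"read inbox", hashlib.sha256).digest()
assert authorized("alice", b"read inbox", tag)
# ...and so can anyone else who leaks, steals, or coerces that same key:
# the check above is identical for both.
```

The extra branch in `authorized` is the added "doorway": one more code path, one more key to protect, and no way for the logic itself to distinguish who is knocking.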

These risks are not theoretical: we know of no case where such an addition of exceptional access capabilities has not resulted in weakened security.

Take the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law intended to make it easier for US law enforcement to tap phone conversations. Under this law, telephone companies had to design their systems to allow wiretapping--adding a vector for exceptional access. It was due to CALEA-mandated wiretapping capabilities that, in 2012, all of the Department of Defense's phone switches were reported to be vulnerable. Similar capabilities, built to comply with CALEA-like laws, were exploited to eavesdrop on the phone conversations of Greece's Prime Minister and at least 100 other dignitaries and politicians, some of them US diplomats. These same capabilities were used to illegally tap the phone conversations of at least 5,000 people in Italy. In the Greek case, it's unclear who did it. In the Italian case, the crime appears to have been authorized by a high-ranking official at the Italian SISMI military intelligence agency. From our perspective it doesn't matter--if the means for exceptional access hadn't been there, these breaches almost certainly wouldn't have happened.

In addition to the technical complexities, there is an inescapable policy question: how do we determine whose law enforcement and government should be allowed to use this exceptional access and for what purpose? Who are the "good guys," and according to whom? Should the Chinese, Canadian, US, and South Sudanese governments all be granted access under the same terms? Whose agendas and policies do we favor, and what does consensus look like? How can such a large and complicated decision-making process make room for the type of innovation that made the Internet what it is? How do we audit access and make room for democratic oversight? And, finally, how do we program millions of machines to respect the huge and dynamic complexity of such decisions, assuming such a process is even possible?

Combine the technical realities with these procedural questions, and you have a recipe for a potentially catastrophic security disaster. Imagine if it weren't phone switches that were vulnerable via exceptional access, as in the CALEA case, but the computers that run critical national infrastructure (the power grid, nuclear reactors, airport traffic management, etc.), the databases that store medical records, the intellectual property of major enterprises, or the engines of the global financial industry. In examples like the OPM breach, the Sony hack, and the attack on Ashley Madison, we already see how difficult it is to secure systems against attack even without--we presume--the added complexity of exceptional access capabilities, which would only make such attacks more common and harder to prevent. It's not difficult to imagine the many kinds of harm that would result if exceptional access were deployed broadly.

None of this means that the job of tracking and apprehending terrorists and other wrongdoers on a global scale is easy. Or that the frustration felt by those tasked with keeping populations safe, especially in the wake of a tragedy like the Paris attacks, isn't very real. However, the palpable immediacy of these problems does not mean that exceptional access is a workable idea (nor is there any evidence that it would have helped in Paris, even if it had been in place). Put another way--however much exceptional access might look like a silver bullet, it is not. Such a path would instead weaken our collective security.
