Why Governments Won’t Let Go Of Secret Software Bugs

By Lily Hay Newman for WIRED.

It’s been three days since WannaCry ransomware attacks began rippling across the world, affecting more than 200,000 people and 10,000 organizations in 150 countries. And the threat of further infection still looms.

The pervasiveness of WannaCry reveals just how insidious wide-scale ransomware attacks can be, endangering public infrastructure, commerce, and even human lives. But the implications of the incident don’t end there. The attack has gone from an acute crisis for security experts to contain to a symbol of how fundamentally vital cybersecurity protection is, and of the true scale of the damage that can follow when systems and devices lack crucial defenses. The far-reaching consequences of WannaCry have also revived a nuanced and long-standing debate about just how much risk the public should be exposed to when intelligence agencies secretly take advantage of vulnerabilities in consumer products.

Stockpiling Vulnerabilities

WannaCry’s evolution is the latest example. The attack spread by exploiting a flaw in Windows’ Server Message Block (SMB) file-sharing protocol using an NSA-developed exploit known as EternalBlue. The NSA discovered the bug and held on to it, but information about the flaw and how to exploit it was stolen in a breach and then leaked to the public by a hacking group known as the Shadow Brokers. Microsoft issued a fix in mid-March, but many computers and servers never actually received the patch, leaving those systems open to attack. By holding on to the vulnerability instead of disclosing it to Microsoft, the NSA turned an espionage technique ostensibly meant to protect people into one that caused a great deal of harm. And there is no sign that agencies like the NSA will abandon the practice.

“Even if what the NSA and the US government did is entirely right, it’s also OK for us to be outraged about this — we’re angry if a cop loses his gun and then it gets used in a felony,” says Jason Healey, a cyberconflict researcher at Columbia University, who studies the US government’s existing vulnerability and exploit disclosure process. “I think the government’s response to this is often ‘Look, this is espionage, it’s how the game is played, quit crying.’ And that’s just not cutting it. Everyone is right to be outraged and the government needs a better way of dealing with this.”

There’s certainly plenty of outrage that an NSA spy tool was stolen in the first place, then leaked, and then exploited to the detriment of individuals and businesses around the world.

“An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen,” Brad Smith, the president and chief legal officer of Microsoft, wrote on Sunday. “This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem. … We need governments to consider the damage to civilians that comes from hoarding these vulnerabilities and the use of these exploits.”

It is vitally important that tech companies release patches in an accessible way and that customers — both individuals and institutions — apply those patches. Experts agree that the tech community and its users share responsibility for the WannaCry fallout given that Microsoft had released a protective patch that wasn’t installed widely enough. But with intelligence agencies around the world essentially betting against this process, their decisions can have an outsized impact. Even Russian President Vladimir Putin invoked this reasoning while speaking in Beijing on Monday. “Genies let out of bottles like these, especially if they’ve been created by the secret services, can then harm even their own authors and creators,” he said.

Who Determines the Greater Good?

For its part, the US has been developing and implementing a program called the Vulnerabilities Equities Process since 2010. It requires intelligence agencies that obtain zero-day (i.e. previously unknown) vulnerabilities and/or exploits to disclose them within the government for review. The idea is to determine on a case-by-case basis whether a greater public good is served by keeping a particular vulnerability secret for espionage purposes or by disclosing it so the manufacturer can issue a patch and protect users at large.

So far the process has proved imperfect, and in fact, there is evidence that some agencies have been shielding bugs from oversight. “How do you reconcile [intelligence agencies’] stated need to use these tools and keep them secret with the fact that they keep leaking or being stolen, and with the fact that they don’t seem to be accounting for that risk?” says Andrew Crocker, a staff attorney at the Electronic Frontier Foundation. “We need to have a reform of VEP or something like it where those risks are properly accounted for.”

Experts say that one possibility is to create a mechanism through which tech companies can participate in intelligence oversight when it comes to vulnerabilities in their products. Such an arrangement would be a major departure for spy groups used to extensive independence and secrecy, but companies that bear significant responsibility when spy tools leak could work as a check on agencies. “There just has to be balance,” says Stephen Wicker, a computer engineering professor at Cornell University who studies privacy and regulation. “The corporations themselves have to be involved in this line drawing somehow.”

There’s no reason to think that intelligence groups will stop seeking out and using undisclosed vulnerabilities and exploits, but WannaCry may serve as a more effective wakeup call for the intelligence community than past incidents, simply because of its scale and its impact on vital services like hospitals. “Whether it results in changing anything on the inside, we the public don’t really have any way of knowing. There are mechanisms like congressional oversight and reporting, but it’s all discretionary,” EFF’s Crocker says. “So I hope that’s an actionable thing that comes out of this — it does seem like everyone agrees that transparency and reporting and oversight and auditing of this area of the intelligence community is very much needed.”

And one concrete thing agencies can do to reduce incidental impact is devote even more resources and effort to securing their digital tools. Perfect security is impossible, but the more control intelligence groups can maintain, the less danger these spy tools pose.

“You cannot do modern espionage without these capabilities,” Columbia’s Healey says. “If you want to know what the Islamic State is doing, if you want to keep track of loose nukes in Central Asia, if you want to follow smugglers who are trying to sell plutonium, this is the core set of capabilities that you need to do that. [But] a minimum role of public policy is, if you’re going to weaponize the IT made by US companies and depended on by citizens, for fuck’s sake at least keep it secret. If you’re going to have to do this, then don’t lose it.”
