The attack took advantage of a vulnerability in the Windows operating system that the U.S. federal government had known about for years but had chosen not to tell Microsoft about until just months before the WannaCry attack began.
That history and the potential for more releases in the coming weeks have intensified the debate around how governments and spy agencies should act when they discover weaknesses in computer software.
It’s a choice of how best to protect the public: Exploit software vulnerabilities to collect intelligence information that may help keep people safe? Or disclose the flaw, letting the software company fix it and protect millions of regular computer users from malicious attacks by hackers?
For years, the U.S. National Security Agency used a flaw in the Windows operating system, nicknamed “EternalBlue,” to spy on intelligence targets, gathering information from their computer files and electronic communications.
In April, a hacking group called the Shadow Brokers reported that it had breached computers used by the Equation Group and stolen information from them. The Equation Group has never identified itself but is widely believed to be part of the NSA.
The Shadow Brokers revealed information about extremely sophisticated digital tools for attacking military, political and economic targets worldwide. One of those tools was “EternalBlue.”
In May, a hacker or hacking group released a piece of malicious software using “EternalBlue” to hijack computers, encrypt the data on them and charge victims a ransom to restore access to their information.
If the NSA had told Microsoft about the flaw five years ago, things could have unfolded differently. In particular, users could have had much more time to update their software – which would have substantially increased the number of people protected against the vulnerability.
Using ‘Zero Days’
The most serious cyberattacks are those that use previously unknown vulnerabilities. They are called “zero day” exploits because developers have had zero days to fix the flaws before trouble begins – so nobody is protected.
Exploiting these vulnerabilities can be effective. For instance, the NSA used four zero-day vulnerabilities as part of a series of cyberattacks on Iran’s nuclear enrichment sites.
That effort, officially code-named “Olympic Games,” created the program known to the public as “Stuxnet,” which damaged about 1,000 centrifuges and may have helped force Iran to negotiate with the U.S. about its nuclear program.
Should They Keep The Secret?
By not telling software companies about newly identified vulnerabilities, government agencies such as the NSA and CIA serve their own purposes of finding ways to gather intelligence undetected. But they also endanger critical systems of governments and regular users alike.
The U.S. lacks strong, clear policies for handling this problem. In January 2014, the Obama administration ordered spy agencies to disclose the weaknesses they find – but with a significant loophole: if a software flaw has “a clear national security or law enforcement” use, the government may keep it secret and exploit it.
These are complex trade-offs involving many questions: What might spies learn by exploiting the vulnerability? How likely is it that adversaries could find it on their own? What might happen if they do? Can the secret be kept securely and reliably? Whatever the ethical questions about how these agencies should best carry out their duty to protect the public, the decision will likely end up being a political one, about how the government should use its power.