A lot of computers, including those at hospitals and other critical institutions, are being hit by a new wave of ransomware. The weaponized parts of this software were developed by – and inevitably leaked from – the National Security Agency.
The ethical hacker community has claimed for a long time that the NSA’s mission, keeping a country safe, is in direct conflict with its methods: an agency cannot keep a country safe by keeping its weaknesses secret from those who could fix them. In this case, the NSA also weaponized software to target those weaknesses – software that inevitably leaked, as all data does sooner or later.
It should be pointed out, of course, that the NSA did not write the entire piece of ransomware. But it did write the infection code – the critical part that lets the ransomware spread and function as a weapon and a means of extortion. Given weaponized code and the means to use it, somebody will use it for whatever ends they find profitable or useful.
As a result, we’re seeing some 50,000 computers worldwide infected with a worm that demands payment to restore functionality, using weaponized code that the NSA developed. More worryingly, British hospitals have been hit by this worm, crippling the ability of British healthcare to function: an estimated 90% of British hospitals are still running Windows XP, a vulnerable Microsoft operating system that is far past the end of any support window.
News from the non-English-speaking world has yet to emerge on the impact, but there’s no reason to believe it’s much better there.
Given this, it’s appropriate to direct your anger at the responsible party, given what we know of how the world works. It’s easy to observe that distributing ransomware like this is a crime, but it’s also practically a law of nature that, given weaponized software, somebody is going to use it for something. (Other mass infections instead use compromised servers and gaming computers to mine bitcoin rather than demand it as payment, which impairs operations no more than a slightly elevated electricity bill.)
So how do you prevent the existence of weaponized software? How do you prevent weapons of mass destruction from coming out into the wild like this?
The obvious answer is “don’t write the weaponized software in the first place” – which the NSA did anyway. But it’s more complicated than that.
It’s also a matter of “inform software vendors of vulnerabilities to help keep us all safe”, which the NSA didn’t do: if vendors removed the vulnerabilities, the agency’s weaponized software would be far less effective going forward, so the NSA has an interest in keeping those systems unsafe.
There’s also, obviously, the matter of “install the security patches provided by software vendors”, which British healthcare providers didn’t do in this story. Various British politicians tried mudslinging about this, which didn’t go over well in public opinion while hospitals remained closed.
The lesson here is that the NSA’s mission, keeping a country safe, is in direct conflict with its methods: collecting a catalog of vulnerabilities in critical systems and constructing weapons to use against those systems – weapons that will always leak – instead of fixing the discovered weaknesses that make us unsafe.
Of course, the NSA has equivalents in other countries, and they are not the slightest bit better in this behavior.