The latest twist on adding backdoors to encryption is spooky – and dangerous

Posted on Feb 23, 2019 by Glyn Moody

Authorities around the world have been calling for backdoors to be added to strong encryption for years – part of an even older battle. The consensus among security experts is that this is a very bad idea, since backdoors are likely to add extra vulnerabilities to systems, weakening security for everyone. Despite that, Australia has gone ahead and passed a law requiring them.

One reason the legislation was rushed through in its present dangerous form is that the main opposition party in Australia thought it would be able to improve things afterwards. Indeed, 12 days after the encryption law was passed, Australia’s Parliamentary Joint Committee on Intelligence and Security announced it would begin a review of the law. Even though leading technology companies and civil liberties organizations are all strongly against the law, it’s not clear the review will lead to any radical changes. Australia’s Digital Rights Watch group wants the entire law repealed:

Encryption is not a barrier to a safe society – quite the opposite – it is a form of protection against criminal acts, including state-sponsored hacking. Encryption plays a role in protecting our digital infrastructure, such as the banking system, the electricity grid and mass transit systems. This is the future of warfare and encryption is one of our few defences against criminal and aggressive acts. It is an important line of defence against bad actors, and we weaken it at our peril.

While Australia continues to argue about the use of “traditional” encryption backdoors, two senior officers from the UK’s signals intelligence agency, GCHQ, have published an interesting proposal that takes a different approach. It contains some welcome statements, such as: “Targeted exceptional access capabilities should not give governments unfettered access to user data.” They say they don’t propose that governments should have access to some kind of “global key” that can unlock any user’s data. They point out that “Government controlled global key escrow systems would be a catastrophically dumb solution in these cases.” They go on to propose what they see as an alternative to weakening strong encryption: silently adding law enforcement agents to otherwise encrypted conversations:

The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication.
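To make that mechanism concrete, here is a minimal sketch – our own simplification, not any real messenger’s protocol – of a fan-out design in which the sender encrypts a copy of each message to every key the provider’s identity system returns. The directory_lookup function and the “ghost” key are hypothetical; the point is that injecting one extra key gives a silent listener a fully valid end-to-end encrypted copy (requires the PyNaCl library):

```python
# Minimal sketch of message fan-out in a simplified end-to-end design.
# Assumptions (not from the GCHQ paper): one SealedBox ciphertext per
# recipient, and a provider-controlled directory that lists the keys.
from nacl.public import PrivateKey, SealedBox

alice, bob = PrivateKey.generate(), PrivateKey.generate()
ghost = PrivateKey.generate()  # hypothetical law-enforcement key

def directory_lookup(include_ghost: bool):
    """Stands in for the provider-controlled identity system."""
    keys = [alice.public_key, bob.public_key]
    if include_ghost:
        keys.append(ghost.public_key)  # the silently added extra 'end'
    return keys

def send(message: bytes, recipient_keys):
    # One ciphertext per recipient key – each is a valid E2E copy.
    return [SealedBox(pk).encrypt(message) for pk in recipient_keys]

ciphertexts = send(b"meet at noon", directory_lookup(include_ghost=True))
# The ghost decrypts its copy; Alice's and Bob's encryption is untouched.
print(SealedBox(ghost).decrypt(ciphertexts[2]))  # b'meet at noon'
```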

The GCHQ authors say this is no more intrusive than the technique used in traditional voice intercepts – clipping extra wires onto circuits – that it doesn’t give any government a power it shouldn’t have, and that it doesn’t require backdoors that weaken security. That all sounds promising, but experts have criticized the idea for several reasons. For example, Susan Landau, a professor in the Department of Computer Science at Tufts University, says:

alligator clips, as they’re called on this side of the Atlantic, intercept communications, but they do so for communications for which the service provider has not made a commitment of providing end-to-end encryption. The difference between alligator clips and the proposed virtual crocodile clips [of GCHQ’s suggestion] is that in the latter, the service provider is being asked to change its communication system to provide exactly what the end-to-end encryption system was designed to prevent: access by a silent listener to the communication.

The Electronic Frontier Foundation is also unconvinced. A post on its site points out that for a system involving this kind of “ghost” participant to work, the client software would have to lie:

In WhatsApp’s UX [user experience], users can verify the security of a conversation by comparing “security codes” within the app. So for the ghost to work, there would have to be a way of forcing both users’ clients to lie to them by showing a falsified security code, as well as suppress any notification that the conversation’s keys had changed. Put differently, if GCHQ’s proposal went into effect, consumers could never again trust the claims that our software makes about what it’s doing to protect us.
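To see why honest clients would expose the ghost, consider a deliberately simplified scheme – our own illustration, not WhatsApp’s actual construction – in which the in-app security code is just a hash over all identity keys in the conversation. Adding a ghost key changes the code, so hiding the ghost requires the client to keep showing the old one:

```python
# Simplified "security code": a hash over all identity keys in the chat.
# Real apps use a more elaborate construction; the principle is the same.
import hashlib

def security_code(identity_keys: list) -> str:
    digest = hashlib.sha256(b"".join(sorted(identity_keys))).hexdigest()
    # Render as digit-like groups, roughly like a messenger safety number.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice_key, bob_key, ghost_key = b"A" * 32, b"B" * 32, b"G" * 32

honest = security_code([alice_key, bob_key])
with_ghost = security_code([alice_key, bob_key, ghost_key])
assert honest != with_ghost  # an honest client would show the mismatch

# For the ghost to stay hidden, a modified client would have to keep
# displaying `honest` and suppress the key-change notification – i.e. lie.
```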

Fiddling with the code in this way would increase the risk that new vulnerabilities would be introduced, and that other actors could use the same ghost function to eavesdrop on supposedly secure conversations. That’s obviously bad for users and society in general. But the EFF is right to emphasize the fundamental problem with the GCHQ proposal: that it would undermine trust in an application and the company that made it – hardly a desirable result. As well-known security expert Bruce Schneier puts it: “Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.”

Matthew Green, a professor at Johns Hopkins University, says providers of messaging software are aware that this is a potential weakness, and are already working to prevent users from being misled by client software. The GCHQ proposal therefore amounts to a government agency ordering a software company not to harden its systems against that kind of attack. Green warns that this could be just the start of governments vetting software: “In the worst-case outcome, we’ll be appointing agencies like GCHQ as the ultimate architect of Apple and Facebook’s communication systems.”
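The kind of hardening Green alludes to can be as simple as trust-on-first-use pinning. The sketch below is a hypothetical construction of our own: the client remembers the key set it first saw for a conversation and refuses to silently accept additions – exactly the behaviour the GCHQ proposal would need providers to leave out:

```python
# Hypothetical trust-on-first-use pinning: remember the key set first
# seen for a conversation and refuse to accept silent additions.
class PinnedConversation:
    def __init__(self, initial_keys: set):
        self.pinned = set(initial_keys)

    def check(self, current_keys: set) -> None:
        added = current_keys - self.pinned
        if added:
            # A real client would block sending and warn loudly instead
            # of quietly re-encrypting to the new key.
            raise RuntimeError(f"unexpected new participant keys: {added}")

conv = PinnedConversation({b"alice-key", b"bob-key"})
conv.check({b"alice-key", b"bob-key"})  # unchanged key set: fine

try:
    conv.check({b"alice-key", b"bob-key", b"ghost-key"})
except RuntimeError as exc:
    print(exc)  # unexpected new participant keys: {b'ghost-key'}
```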

The author of the EFF post mentioned above has co-written another, more technical critique of the GCHQ proposal. The analysis sees four likely routes for detecting when the ghost is present: binary reverse engineering, cryptographic side channels, network-traffic analysis, and crash log analysis. The post also points out a different kind of flaw in the idea:

There’s another pretty glaring problem with the ghost proposal that we’re not going to examine here – it only works with text or asynchronous protocols. It’s not immediately clear to us how it could be adapted to real-time audio or video communications.
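As a toy illustration of the network-traffic-analysis route – our own simplification, with an assumed per-recipient overhead – note that in a fan-out design like the one sketched earlier, each extra recipient adds one encrypted copy, so a two-party chat carrying three copies is observable without breaking any encryption:

```python
# Toy model: each recipient adds one encrypted copy of the message.
PER_COPY_OVERHEAD = 48  # assumed bytes of per-recipient ciphertext overhead

def payload_size(plaintext_len: int, n_recipients: int) -> int:
    return n_recipients * (plaintext_len + PER_COPY_OVERHEAD)

msg_len = 100
two_party = payload_size(msg_len, 2)  # expected size for a 1:1 chat
ghosted = payload_size(msg_len, 3)    # one silent extra copy
print(ghosted - two_party)            # 148 bytes of unexplained growth
```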

As the various critiques above make clear, however superficially attractive the GCHQ proposal might seem, it is problematic from multiple viewpoints. It certainly doesn’t resolve the long-standing tension between a desire for the authorities to have access to communications protected with strong encryption, and the requirement for the public, businesses and government to be able to use the Internet as safely as possible.

Featured image by Max Pixel.