A Serious Concern About Post-Quantum Cryptography and Strength Targets

Posted on Mar 6, 2019 by Derek Zimmer

I have been writing a number of articles about the state of the NIST post-quantum cryptography competition, with short summaries of the candidate projects and what they are all about.

What I haven’t talked about is the structure of the competition, and my concerns about the inclusion of relative “strength targets” for the new algorithms to be selected. NIST has standardized “security levels” for encryption: you cite the latest academic research on the properties of your proposed scheme and try to assign a “strength” to it based on that research. You cite both the known strengths and the known weaknesses of your proposal and justify your choices based on the work of others. It is somewhere between a mathematical proof and peer review. At the end of this road you come up with a number, a relative “bit strength” that is modeled after the different strengths of AES (the current encryption standard) and SHA (the current hashing standard).

The “security levels” for the NIST post-quantum competition are defined as follows:

Security level 1 – Any attack that breaks the relevant security definition must require computational resources comparable to or greater than those required for key search on a block cipher with a 128-bit key (e.g. AES128)

Security level 2 – Any attack that breaks the relevant security definition must require computational resources comparable to or greater than those required for collision search on a 256-bit hash function (e.g. SHA256/SHA3-256)

Security level 3 – Any attack that breaks the relevant security definition must require computational resources comparable to or greater than those required for key search on a block cipher with a 192-bit key (e.g. AES192)

Security level 4 – Any attack that breaks the relevant security definition must require computational resources comparable to or greater than those required for collision search on a 384-bit hash function (e.g. SHA384/SHA3-384)

Security level 5 – Any attack that breaks the relevant security definition must require computational resources comparable to or greater than those required for key search on a block cipher with a 256-bit key (e.g. AES256)
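For concreteness, here is a rough sketch in Python of how these levels line up with their reference primitives and the classical work factors they are usually equated with. This is my own illustrative shorthand, not an official NIST table; the levels are defined by the attacks above, not by a single number.

    # Rough, illustrative mapping of NIST PQC security levels to their
    # reference primitives and the approximate classical work factor
    # (expressed as an exponent of 2) each one corresponds to.
    NIST_PQC_LEVELS = {
        1: ("key search on AES128",       128),
        2: ("collision search on SHA256", 128),
        3: ("key search on AES192",       192),
        4: ("collision search on SHA384", 192),
        5: ("key search on AES256",       256),
    }

    for level, (reference, bits) in NIST_PQC_LEVELS.items():
        print(f"Level {level}: {reference} (~2^{bits} classical operations)")

Classically, collision search on an n-bit hash takes roughly 2^(n/2) operations, which is why level 2 lands near AES128 and level 4 near AES192 in this table; the levels are still ordered 1 through 5 in part because known quantum attacks speed up key search more than collision search.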

The Goal: Find New Encryption for 2020 and Beyond

The primary goal of the competition is to find new asymmetric schemes for key establishment (used in handshakes to set up secure connections) and new signature schemes to replace DSA/ECDSA/RSA for message authenticity and integrity checking. Both of these need to be able to meet security level 5, the AES256-equivalent strength.

The Problem: Thin Margins of Safety

The problem with capping the strength of new cryptography is that serious new breakthroughs could be on the horizon that dramatically speed up cryptanalysis of these new algorithms. With such a large unknown factor, targeting today’s relative strength seems like an odd choice, especially when we know that state-level actors are already collecting encrypted data and storing it away until they have the technology to break it.

This concern is not unfounded, as it has already happened with a new analysis method for Ring Learning With Errors cryptography (R-LWE). In this paper, a method is described for finding weak rings that have isogenous rings which are easier to solve. These flawed rings allow a square-root (or greater) speedup in analysis. These are the kinds of speedups that can make encryption trivially breakable: a “Security Level 1” cipher hit by a breakthrough that allows a square-root speedup suddenly has 64 or fewer “bits of strength”, which is very breakable.
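To see why that matters, here is a quick back-of-the-envelope sketch of the arithmetic. The function name and the speedup model are mine, for illustration only: a square-root speedup halves the effective exponent, since sqrt(2^n) = 2^(n/2).

    # Hypothetical model: a square-root speedup in the best attack halves
    # the effective security exponent, so a nominal 128-bit scheme drops
    # to roughly 64 bits of effective strength.
    def effective_bits(nominal_bits, speedup_exponent=0.5):
        # speedup_exponent = 0.5 models a square-root speedup in analysis
        return nominal_bits * speedup_exponent

    print(effective_bits(128))  # 64.0  -- considered breakable today
    print(effective_bits(256))  # 128.0 -- still a comfortable margin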

To me, with real-world evidence that speedups like this are possible against the current candidates, it seems reckless not to build a huge margin of safety into these encryption schemes, so that they can resist advances in cryptanalysis and technology.

The Current Approach Is Not Prudent

NIST’s approach seems to be “we’ll make the keys larger if a speedup is found,” which does not take into account that a significant speedup would allow everything encrypted up to that point to be broken, and would require everyone to update or upgrade to support the new key lengths.

Yes, larger keys and parameters are slower to compute with, but many of these new encryption schemes are actually faster than the ones they would replace, so there is room to increase the margin of safety.

Here’s hoping that the finalists for a new standard can be tuned by the end user for much larger margins of safety.