Calls for “Lightweight” Encryption are Short-Sighted and Dangerous

Posted on May 8, 2019 by Derek Zimmer

As we march into the “Internet of Things,” where your fridge and toaster need security updates, multiple companies and organizations have called for standardizing lightweight encryption.

“Lightweight encryption” is encryption that demands as little work as possible from a processor while still providing reasonable security assurances. The idea is that cheap IoT devices have neither the processing power nor the battery capacity to spare for current encryption standards like AES and ECDH.

NIST is currently running a competition to adopt a new lightweight encryption standard that targets 112 bits of security.

I see two problems with this competition and with the overall concept of lightweight cryptography. The first is the idea of a “barely safe” security margin; the second is that hardware advancements are eliminating the need for lightweight cryptography altogether.

Narrow Margins of Safety are Dangerous

One thing that comes up often in cryptography circles is “bits of safety” or “bits of strength.” This number represents, as a power of two, the number of operations a brute-force attack against the algorithm requires. Other properties can matter too, such as whether an attack can be parallelized (run on many computers at once), but the number of “bits” is the primary figure for judging strength at a glance.

Typical targets for strength are between 128 and 256 bits, meaning it would take on the order of 2^128 to 2^256 operations to break the cipher reliably. These numbers are crucially important: they determine how much computing power an attacker has to throw at a cipher to break it, and thus the cost in hardware and electricity of getting at the information underneath the encryption.
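A few lines of arithmetic show why those exponents matter. As a rough illustration (the 10^18 guesses-per-second figure is a hypothetical, roughly exascale attacker, not a real benchmark):

```python
SECONDS_PER_YEAR = 31_557_600  # Julian year

def brute_force_years(bits: int, guesses_per_second: float) -> float:
    """Expected wall-clock years to exhaust a keyspace of 2**bits guesses."""
    return 2**bits / guesses_per_second / SECONDS_PER_YEAR

# Hypothetical attacker making a billion billion (1e18) guesses per second:
print(f"128-bit keyspace: {brute_force_years(128, 1e18):.2e} years")
print(f"256-bit keyspace: {brute_force_years(256, 1e18):.2e} years")
```

Even at that absurd rate, a 128-bit keyspace takes on the order of ten trillion years to exhaust, which is why 128 bits is usually treated as the floor for a comfortable margin.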

It is also crucial to understand that cryptography is a constantly moving target. Attacks against a given algorithm only ever improve, so as time marches on our ciphers get weaker. Sometimes an optimization in analysis leads to devastating gains in efficiency, as we saw with the collision attacks on MD5 and with the related-key attacks on AES-256 that exploit its key schedule. The AES key-schedule flaw is particularly important here, as it weakened AES-256 from 256 bits of security down to 119 bits. That enormous drop luckily still leaves a reasonable margin of safety, but it demonstrates a crucial lesson: huge optimizations have happened to worldwide cryptographic standards before, and they are likely to happen again.

So when we look back at the NIST proposal and its 112-bit security target, we should consider what a square-root (or faster) speedup would do if it arrived while this were a worldwide standard. 56 bits of security can be brute-forced rather quickly by an adversary with access to a lot of computing power.
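To put that in concrete terms (the trillion-guesses-per-second rate below is a hypothetical figure for a well-funded adversary, not a measurement), a square-root speedup against a 112-bit target leaves a keyspace that falls in under a day:

```python
def brute_force_seconds(bits: int, guesses_per_second: float) -> float:
    """Wall-clock seconds to exhaust a keyspace of 2**bits guesses."""
    return 2**bits / guesses_per_second

# A square-root attack halves the effective bit strength: sqrt(2**112) == 2**56.
weakened_bits = 112 // 2

# At a hypothetical trillion (1e12) guesses per second:
hours = brute_force_seconds(weakened_bits, 1e12) / 3600
print(f"2^{weakened_bits} keyspace at 1e12 guesses/s: ~{hours:.0f} hours")
```

Roughly 20 hours. Compare that with a 128-bit starting point, where the same square-root speedup still leaves 64 bits, and a 256-bit starting point, which leaves a still-comfortable 128.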

We also need to think about the hubris of targeting 112 bits of security. NIST is effectively saying it is so confident in the algorithm and its implementations that no significant advances will be made against it within its useful lifetime. History says this simply is not true.

Hardware Advances Are Eliminating the Need for Lightweight Crypto

With the rapid advancement of chip technology, we are seeing gate sizes of 14nm and 10nm, with 7nm arriving this year and processes for even smaller gates in development. It is important to understand that in chip design there has been a performance ceiling for new chips due to thermal and power constraints, but on the low end there is ample space for circuitry.

This is because there is a minimum amount of surface area that a chip needs in order to be able to dissipate the heat that it generates. If you go smaller, your chip will overheat, so even if you have blank silicon areas that do nothing, your chip has a minimum size.

This is key because when we talk about the Internet of Things and cryptography, we are talking about low-end chips with physical space to spare on their silicon. This means that we can add specialized hardware to our designs, essentially for free.

ASICs (application-specific integrated circuits) are specialized chips with operations hard-wired into their circuitry. Operations accelerated this way run very fast and consume far less power than the same operations on a general-purpose processor. This property is crucially important, because circuitry specifically designed to speed up cryptography already exists in many forms: Intel, AMD, ARM, and many others all ship chips that accelerate encryption and decryption (AES-NI on x86, for example).
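You can check whether your own machine advertises this kind of acceleration. As a minimal sketch (Linux-specific, since it reads `/proc/cpuinfo`; on x86 the `aes` flag indicates AES-NI, and on ARM the same name appears under `Features` for the Cryptography Extensions):

```python
from pathlib import Path

def cpu_has_aes_instructions():
    """Best-effort check for hardware AES support by reading the
    Linux /proc/cpuinfo flag list. Returns None on non-Linux systems."""
    cpuinfo = Path("/proc/cpuinfo")
    if not cpuinfo.exists():
        return None  # not Linux; can't tell from here
    for line in cpuinfo.read_text().splitlines():
        # x86 lists capabilities under "flags", ARM under "Features".
        if line.startswith("flags") or line.startswith("Features"):
            return "aes" in line.split()
    return False

print(cpu_has_aes_instructions())
```

On most desktop, server, and recent mobile silicon this reports `True`, which is the point: the acceleration is already ubiquitous.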

You can actually see this phenomenon of “filling empty space” in the mobile market. Moving from 10nm gates to 7nm gates roughly doubles circuit density, doubling the amount of logic that fits in the same area. Phone makers are not opting for 16-core phones, because there is no performance benefit in doing so. Instead they are adding things like ASICs for processing photos, video, sound, and, yes, cryptography faster. These ASICs make sense from a manufacturer’s perspective: they increase speed and power efficiency for specific tasks, whereas adding more cores would do nothing for the vast majority of mobile users. The IoT market has the same needs and will make the same decisions for its own chips.

So we have low-end chips with space to spare, and existing technology ready to fill that space with faster cryptography. With those two factors combined, you have to wonder why we need “lightweight cryptography” at all when hardware acceleration with ASICs eliminates the need.

In summation: the thin margins of safety, combined with advancements in hardware, make this entire exercise a waste of time and resources.