Can hardware ever be trusted? The Betrusted project aims to find out by going back to basics

Posted on Jan 22, 2020 by Glyn Moody

As previous posts have noted, the Internet of Things is being widely embraced in the form of so-called “smart speakers” and other devices. That’s despite the fact that few of these hardware systems can be regarded as secure: leaks of personal data can and do occur in multiple ways. Mostly, that is because the software has flaws and even backdoors. It is generally accepted that open source code is the best way to minimize such problems. Because anyone – including experts – can inspect the software, security weaknesses can be caught. That doesn’t mean they will be caught, and open source software is not a panacea. But even optimistically assuming all the main software flaws are spotted and fixed, that still leaves another crucial question that is rarely considered: can the underlying hardware ever be trusted? And if it can’t, what happens to privacy?

Those issues are addressed in a fascinating post by Andrew “bunnie” Huang. He also gave a talk on the topic at last year’s Chaos Computer Club conference, with the title “Open Source is Insufficient to Solve Trust Problems in Hardware”. Drawing on his long experience in designing open hardware systems, he comes to a rather sobering conclusion:

open hardware is precisely as trustworthy as closed hardware. Which is to say, I have no inherent reason to trust either at all. While open hardware has the opportunity to empower users to innovate and embody a more correct and transparent design intent than closed hardware, at the end of the day any hardware of sufficient complexity is not practical to verify, whether open or closed. Even if we published the complete mask set for a modern billion-transistor CPU, this “source code” is meaningless without a practical method to verify an equivalence between the mask set and the chip in your possession down to a near-atomic level without simultaneously destroying the CPU.

The reason why is hinted at in that last sentence. Open source code can be checked by experts as it is being written, and then a cryptographic “hash” – in effect a digital fingerprint – can be generated to confirm that the software downloaded onto a user’s system is identical to the original. However, as Huang explains in his post, there is no equivalent way to check that hardware has not been compromised during its physical delivery. There are too many points at which the hardware could have been modified in ways that are hard to detect. That’s serious, because if we can’t trust our hardware, we can’t trust even open source software, since we have to use hardware – possibly compromised – to check whether the software hashes are correct.
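To make the software side of that concrete, here is a minimal sketch of such a hash check in Python. The file name and expected digest are placeholders, not values from any real release:

```python
# Minimal sketch: verify a downloaded file against a published SHA-256
# digest. The file name and EXPECTED value are placeholders.
import hashlib

EXPECTED = "replace-with-the-digest-published-by-the-project"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't need to fit in memory
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("downloaded-image.bin") == EXPECTED:
    print("digest matches the published value")
else:
    print("MISMATCH: do not trust this download")
```

Note the circularity Huang is pointing to: this check itself runs on hardware, so a compromised machine could simply lie about the result.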

In order to explore whether those problems can be overcome, Huang and several colleagues have started a privacy project called Betrusted. It’s “a protected place for your private matters. It’s built from the ground up to be checked by anyone, but sealed only by you. Betrusted is more than just a secure CPU – it is a system complete with screen and keyboard, because privacy begins and ends with the user.” Its aim is to create a secure communication device whose hardware can be trusted, and which does protect privacy.

Betrusted is not a phone: it is a secure enclave with auditable input and output surfaces. It relies on sharing your existing connectivity – such as your phone or cable modem – to access the Internet. Say you’re on the road and you want to securely message a friend. You would tether Betrusted to your phone’s Wi-Fi, so that the phone is just an untrusted relay for encrypted messages travelling to and from Betrusted. The only place the decrypted messages will ever appear is on the trusted screen of a Betrusted device.
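The relay property is easy to illustrate. Below is a minimal sketch using the PyNaCl library – purely illustrative, not Betrusted’s actual messaging stack – in which the intermediate hop only ever handles ciphertext:

```python
# Illustrative sketch of end-to-end encryption through an untrusted relay,
# using PyNaCl (pip install pynacl). Key handling is simplified: a real
# device would generate and guard its keys itself.
from nacl.public import PrivateKey, Box

# Each trusted device holds its own key pair; public keys are exchanged.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts on her trusted device...
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

# ...the phone merely forwards this opaque blob: no keys, no plaintext...
relayed = ciphertext  # untrusted hop

# ...and Bob decrypts only on his trusted device.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(relayed)
assert plaintext == b"meet at noon"
```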

By limiting the functionality of the Betrusted communication device, Huang and his collaborators are able to produce hardware that is built out of very simple elements. Sufficiently simple, in fact, that they can be checked for tampering by the average user – thus avoiding the problems of the insecure delivery chain.

The biggest problem is the main processor chip. Today, these are built from many layers, each containing millions of microscopic transistors. That means it is pretty much impossible to check that each layer matches the original design without destroying the whole package. By simplifying the device, Huang’s team is able to use a field-programmable gate array (FPGA) instead. This is a kind of blank processor chip that is designed to be configured by the user after it has been manufactured. In effect, this turns the processor’s design into a piece of software – the configuration file, known as a bitstream – which means that checks can be carried out to ensure that tampering has not taken place.
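For instance, if the toolchain builds deterministically, a bitstream rebuilt from the published source can be compared byte for byte against the configuration read back from the device. A hedged sketch, with hypothetical file names rather than Betrusted’s real tooling:

```python
# Hypothetical sketch: compare a bitstream rebuilt from the published
# HDL source with the configuration read back from the device.
# File names are placeholders; this assumes a reproducible build.
from pathlib import Path

rebuilt = Path("rebuilt.bit").read_bytes()    # built locally from source
readback = Path("readback.bit").read_bytes()  # dumped from the FPGA

if rebuilt == readback:
    print("device configuration matches the open design")
else:
    print("MISMATCH: possible tampering (or a non-deterministic build)")
```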

The other challenging elements are the keyboard and the screen. Typically, these are complex pieces of hardware that are difficult for non-technical users to inspect for signs of tampering. Again, Huang and his colleagues have chosen extremely basic options. Betrusted’s keyboard is designed to be checked by simply holding it up to a light. Similarly, the LCD’s on-glass circuits are entirely constructed of transistors large enough to be inspected using a bright light and a USB microscope. As with open source software, that may not be something that everyone will do. But the fact that anyone could check the hardware in this way acts as a major disincentive to tampering with this equipment.

Huang sees his project as a beginning, not an end: “We think of Betrusted as more of a ‘hardware/software distro’, rather than as a product per se. We expect that it will be forked to fit the various specific needs and user scenarios of our diverse digital ecosystem.” That’s great news, because we need not just one such trustable device, but a whole array of them. They may not make much of a dent in a world of definitely untrustworthy Internet of Things systems, but they would be a start. Showing that trustable hardware systems can be built if we try hard enough would be a small but important victory for privacy.

Featured image by Chaos Computer Club and Bunnie Huang.