What happens to identity and privacy when every biometric can be faked?

Posted on Jan 5, 2019 by Glyn Moody

Identity and privacy are closely bound up. Typically, you use proof of your identity to access your private information. Alongside traditional approaches like passwords and hardware tokens, biometrics are increasingly employed to authenticate people, notably with smartphones, many of which now come with fingerprint sensors and facial recognition built in as standard. As well as convenience, this seems to be driven in part by a somewhat naive view that our biometrics are unique and immune to attack. So what happens to identity and privacy when it becomes easier to fake just about any biometric? We’re about to find out.

For example, it turns out that there are special “master fingerprints” that can match a large number of real fingerprints because of key features they possess. These may be natural, obtained by searching through fingerprint databases, or created artificially. A recent academic paper described a technique for producing synthetic master fingerprints that possess the additional property of looking like real fingerprints to the untrained eye – not the case for previous examples.

Against a fingerprint recognition system with a 1 in 1,000 chance of making a false match, the synthetic master fingerprints fooled the checks 23% of the time. Less-stringent systems with a 1 in 100 false-match rate – a level more typical of real-world deployments – were tricked 77% of the time. Because these master fingerprints also looked realistic, it might be possible to apply them as 3D-printed films worn on the fingers in practical attempts to pass fingerprint checks.
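The arithmetic here is worth spelling out. The sketch below is purely illustrative: the match rates come from the figures above, but the assumption that each unlock attempt is independent is a simplification, not the model used in the research.

```python
# Illustrative sketch: why master fingerprints are so dangerous.
# A system that falsely matches a random print 1 time in 1,000 sounds
# safe, but an attacker gets several tries, and an engineered master
# print matches far more often per try than a random one.
# ASSUMPTION: attempts are independent (a simplification).

def attack_success(per_try_rate: float, attempts: int) -> float:
    """Probability that at least one of `attempts` tries matches."""
    return 1 - (1 - per_try_rate) ** attempts

# Many phones allow around 5 fingerprint attempts before locking.
print(f"random prints vs 1-in-1000 system: {attack_success(0.001, 5):.1%}")
print(f"master prints at the 23% rate:     {attack_success(0.23, 5):.1%}")
```

With ordinary prints the attacker's five tries yield well under a 1% chance of success; with prints matching at the 23% rate reported above, the chance over five tries climbs above 70%.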

Facial-recognition systems are also under attack. The Consumer Association in the Netherlands found that 42 out of the 110 smartphones it tested could be unlocked with just a high-quality photo of the owner. A recent article in TechCrunch describes a more sophisticated approach – using a 3D printer to create a model of the entire head:

I was ushered into a dome-like studio containing 50 cameras. Together, they combine to take a single shot that makes up a full 3D image. That image is then loaded up in editing software, where any errors can be ironed out. I, for instance, had a missing piece of nose.

Backface then constructs the model with a 3D printer that builds up layers of a British gypsum powder. Some final touch-ups and colourings are added, and the life-size head is ready within a few days, all for just over £300 [about $375].

All four Android smartphones tested failed to spot the difference between the real person and the 3D-printed model. The face-recognition systems used by Apple and Microsoft, by contrast, were not fooled. However, the model was fairly simple, so more time and effort might have resulted in successful circumvention for these systems too. Moreover, as technology advances, it is probable that such 3D-printed models will become increasingly lifelike, and thus hard to detect.

Another biometric that is used to identify people is the human voice. Here’s a story in The Telegraph about someone putting together a piece of kit that successfully allowed impersonation:

The machine, known as a Semi-automatic Social Engineering Bank Telephone Machine, allowed Muldowney-Colston to alter his voice to pretend to be someone of any age or gender.

This allowed the 53-year-old to impersonate genuine customers when he spoke to banks. The machine also played pre-recorded bank messages in a bid to trick unsuspecting victims.

The [London] Met Police said the machine was used in a scam that conned hundreds of people out of money.

That sounds like a fairly crude approach, and probably not one that would have beaten rigorous voice biometric systems. But research from last year suggested the real threat here might not be technology, but people who are skilled in modifying their voices in order to impersonate others.

Iris biometrics are not the answer, either. A report from 2017 showed how easy it is to fool smartphone iris-recognition systems by using a normal digital camera to surreptitiously take an infra-red shot of the phone user's eyes from a moderate distance. The basic problem is that our eyes are generally visible whenever we are out in public, which makes obtaining images of the irises a surprisingly straightforward task. As the technical capabilities of digital cameras improve, so does the quality of the images obtained, and with it, the ease of faking iris biometrics.

Perhaps new approaches are needed – like vein authentication. The idea is that the unique shape, size, and position of the veins under the skin of a user's hand can be used to identify them. However, this system too has recently been defeated, again using a standard digital camera. The camera is modified slightly to remove the infra-red filter, which allows the veins to be seen more easily. From these images, researchers were able to build a wax model of a hand, complete with vein details, that was good enough to fool vein-authentication systems. It might be argued that it is unlikely anyone will go to all this trouble to circumvent this biometric. But the Motherboard article reporting on the research notes that vein authentication is being used by Germany's equivalent of the NSA, the BND: gaining unauthorized access to its buildings would certainly be worth the effort.

Finally, on a related note, here’s some amazing work that uses machine learning to generate photo-realistic images of novel human faces by drawing on a library of real individuals. There’s a fascinating video that shows how countless faces were created by analyzing 30,000 celebrity photographs. Machine-learning software was able to extract rules for generating images that mimic the appearance of real photos. By varying a number of abstract parameters, a wide range of realistic but artificial human faces were produced, a small selection of which appears at the start of this post.
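The "abstract parameters" mentioned above are latent vectors: each face corresponds to a point in a high-dimensional space, and moving between points morphs one face into another. The sketch below is a hypothetical illustration of that latent-space side only, since in the real system a trained generator network (far too large to reproduce here) maps each vector to a photo-realistic image; the 512-dimensional latent size matches the Nvidia work, but everything else is an assumption for illustration.

```python
import numpy as np

# Each generated face corresponds to one random latent vector; the
# trained GAN generator (not included here) turns vectors into images.
LATENT_DIM = 512
rng = np.random.default_rng(0)

z_a = rng.standard_normal(LATENT_DIM)  # "face A"
z_b = rng.standard_normal(LATENT_DIM)  # "face B"

def interpolate(z1, z2, steps=5):
    """Walk through latent space: rendered through the generator, each
    intermediate point would be a plausible face partway between the
    faces produced by z1 and z2."""
    return [z1 + t * (z2 - z1) for t in np.linspace(0.0, 1.0, steps)]

path = interpolate(z_a, z_b)
print(len(path), path[0].shape)  # a 5-step morph, each point 512-dim
```

This is why a single trained model can emit an effectively unlimited supply of novel faces: any fresh random vector, or any point between two known ones, yields a new, realistic-looking person who does not exist.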

Although not directly applicable to the issue of identity and authentication, this work does indicate how AI techniques and powerful hardware allow biometrics to be analysed and then applied in new ways. It seems likely that this will lead to new challenges for privacy in the not-too-distant future.

Featured image from Nvidia.

About Glyn Moody

Glyn Moody is a freelance journalist who writes and speaks about privacy, surveillance, digital rights, open source, copyright, patents and general policy issues involving digital technology. He started covering the business use of the Internet in 1994, and wrote the first mainstream feature about Linux, which appeared in Wired in August 1997. His book, "Rebel Code," is the first and only detailed history of the rise of open source, while his subsequent work, "The Digital Code of Life," explores bioinformatics - the intersection of computing with genomics.
