The Hong Kong protests reveal how our faces are becoming a key battleground for privacy and freedom
The protests in Hong Kong are much in the news. But for readers of this blog, there’s a particular reason why they are of interest. Mainland China is well known for its advanced and pervasive surveillance systems, and Hong Kong naturally shares many of its approaches. Protesting in the region therefore requires new skills in order to circumvent attempts by the local government to clamp down on such actions. Some are quite low-tech, for example a unique system of hand signals, used to send messages through the crowd about what equipment is required. Others are born digital – like the app HKmap.live, which crowdsources the real-time locations of Hong Kong police vehicles, riot and special tactical police, and locations where tear gas has been fired. It worked so well that it’s just been removed by Apple following pressure from the Chinese government.
One of the striking features of the continuing protests in Hong Kong is that most of the people participating in them wear masks. That’s partly practical, because of the tear gas that is deployed routinely. However, most masks are made of cloth, and so won’t provide much protection from the gas. Wearing masks is mostly about anonymity, which is why the Hong Kong government’s recent ban on wearing face masks is significant. The first arrests have been made, but it’s not yet clear whether the authorities will be able to enforce the new law, and prevent thousands of people from wearing masks.
In any case, many other jurisdictions already ban masks during protests, so the issue of how to protect privacy in a world without masks is a wider problem that needs solving. One Hong Konger posted on Twitter what looked like an intriguing solution. The short video shows faces being projected onto another person’s face, creating ghostly double faces that would be hard for surveillance systems to read. It’s actually from 2017, and part of a larger project looking at surveillance and how to counter it. Other solutions include a scarf designed to confuse face detection systems, and a facial recognition distorting mask.
There is an obvious problem with fixing a small projector on top of your head: it marks you out as someone who is trying to avoid surveillance. That in itself is likely to make you an object of interest to the police. The same issue afflicts other approaches that try to confuse facial recognition systems. For example CV Dazzle, which uses “avant-garde hairstyling” and makeup designs to “break apart the continuity of a face”. The idea behind the approach is that facial-recognition algorithms generally rely on the identification and spatial relationship of key facial features. They employ things like symmetry and tonal contours to do that, and so detection can be blocked by creating an “anti-face” that disrupts the recognition process. However, as images on the site show, anyone employing this approach is easy to pick out in a crowd – unless everyone sports “avant-garde hairstyling” – thus negating much of its benefit.
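The underlying idea can be seen in a deliberately simplified sketch – not any real detector, but a toy that relies on the kind of left-right symmetry cue that CV Dazzle-style asymmetric makeup is designed to disrupt (all pixel values and the threshold here are hypothetical):

```python
# Toy sketch (not a real face detector): a "detector" that relies purely
# on left-right symmetry, the kind of cue CV Dazzle-style makeup disrupts.

def symmetry_score(row):
    """1.0 for a perfectly mirror-symmetric row of pixel values, lower otherwise."""
    diffs = [abs(a - b) for a, b in zip(row, reversed(row))]
    return 1.0 - sum(diffs) / len(row)

def looks_like_face(image, threshold=0.9):
    """Declare a 'face' only if every row is close to mirror-symmetric."""
    return all(symmetry_score(row) >= threshold for row in image)

plain_face = [[0.2, 0.8, 0.8, 0.2],
              [0.5, 0.1, 0.1, 0.5]]
dazzled    = [[0.9, 0.8, 0.8, 0.2],   # asymmetric "makeup" on one cheek
              [0.5, 0.1, 0.1, 0.5]]

print(looks_like_face(plain_face))  # True
print(looks_like_face(dazzled))     # False
```

A real system uses far richer features than raw symmetry, of course, but the principle is the same: push the measured face far enough from the detector’s expectations and it stops registering as a face at all.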
The follow-up project, HyperFace, works by adding more faces in the form of patterns on clothes – a technique also demonstrated in the 2017 video mentioned above. That’s slightly less noticeable, although police officers will soon learn to spot the characteristic designs. More sophisticated and considerably less obvious is a special hat that uses invisible projections to fool face recognition systems:
Using tiny infrared LEDs wired to a baseball cap, the researchers were able to project dots of light onto the wearer’s face in a way that not only can obscure their identity but also “impersonate a different person to pass facial recognition-based authentication.” This is a much more challenging goal, and it requires using a deep neural network to interpret a static image of the victim’s face and project the appropriate infrared lighting onto the impersonator.
Most subtle is the following idea based on wearing specially-designed eyeglasses:
We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual.
It seems extraordinary that 3D-printed glasses can not only evade facial recognition software, but even fool it into seeing a completely different face from that of the person wearing them. The paper was published in 2016, so it is possible that today’s improved AI-based systems could spot the trick, but it’s nonetheless an intriguing approach.
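The principle behind the attack can be illustrated with a heavily simplified sketch – not the paper’s method, which optimises printable eyeglass patterns against deep networks, but a toy linear two-class “recognizer” fooled by perturbing only the pixels under a hypothetical eyeglass frame (all weights, pixel values and the step size are made up for the illustration):

```python
# Toy illustration of a mask-restricted adversarial attack: only the
# pixels covered by the "eyeglass frame" may be changed, yet that is
# enough to flip a simple linear recognizer's decision.

def predict(weights, image):
    """Linear class score: positive -> 'Alice', negative -> 'Bob'."""
    return sum(w * p for w, p in zip(weights, image))

def eyeglass_attack(weights, image, mask, step=1.0):
    """Nudge only the masked (frame-covered) pixels in the direction that
    flips the decision -- a sign-gradient step restricted to the mask."""
    target_sign = -1 if predict(weights, image) > 0 else 1
    return [
        p + target_sign * step * (1 if w > 0 else -1) if m else p
        for p, w, m in zip(image, weights, mask)
    ]

# A 4-"pixel" face; pixels 1 and 2 sit under the glasses.
weights = [0.2, 1.0, -0.8, 0.1]
face    = [0.5, 0.9, 0.1, 0.4]        # scores positive: seen as Alice
mask    = [False, True, True, False]  # only these pixels may change

adversarial = eyeglass_attack(weights, face, mask)
print(predict(weights, face) > 0)         # True  (recognized as Alice)
print(predict(weights, adversarial) > 0)  # False (now scored as Bob)
```

Real face recognizers are nonlinear deep networks, so the actual attack needs gradient access (or estimates) to the model, but the core trick is the same: a small, physically wearable region of the image carries enough influence to steer the output.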
Finally, it’s worth emphasising that facial recognition can be turned against the authorities. As a New York Times article on the Hong Kong protests explains, protesters are not only seeking to avoid identification by the police, but also trying to dox police officers:
“Dadfindboy” – a play on the name of a Facebook group created under the auspices of helping mothers find their children, but which ultimately became a way for pro-government groups to gather photos of protesters – is one forum for the doxxing of police officers. By turns facetious, juvenile, cruel and profane in tone, the channel repeatedly reveals personal information and photos, some of them intimate, of police officers and their family members.
The title of the New York Times article is “In Hong Kong Protests, Faces Become Weapons”. As surveillance based on facial recognition becomes increasingly common around the world, so faces will be not just weapons, but also a key, if unlikely, battlefield.
Featured image by Jing-cai Liu.