How to Hide from Facial Recognition Software with Fawkes

Posted on Jan 3, 2022 by David Rutland

I recently wrote about constructing a fake persona to hide your identity online. I’ll admit that for most people, it’s a lot of trouble to go through. In most cases, it’s completely over the top. Creating a fake person from scratch in order to preserve your privacy is the domain of the paranoid.

You want to be able to live a normal life — to hang out on social media with your friends and family and occasionally post photos of trips to the beach, social gatherings (when they resume), and other life events — without the images being scraped and added to a facial recognition database.

It’s a real problem, and since social media came into existence, service providers and other shady third parties have been using your data any way they please.

That said, facial recognition features are easy to use, and many people want them. Being able to quickly find photos of a specific person among tens of thousands of snaps is genuinely useful — although you should probably consider how often you actually use that feature.

Facial Recognition Is a Risk to You

The potential benefits of automatic recognition from a snapshot or video feed are limited: You can unlock the front door to your house, use your face to pay for your subway ride in a number of cities, and easily find images of yourself in galleries and albums. 

And that’s basically it. Facial recognition technologies don’t treat you as the user — you are just a data subject, and the real advantages are for organizations that sell access to services that can pinpoint a person from an image snatched on CCTV.

And while facial recognition may not allow you to purchase goods in stores, that doesn’t mean that the stores aren’t using it for their own ends.

Loyalty programs, personalized marketing, and in-store security all use facial recognition in order to get you to spend more, and ensure that you aren’t stealing the stock.

From the point of view of a retailer, it’s highly desirable to recognize known or suspected shoplifters as soon as they sidle through the sliding doors, but for everyone else, it’s intrusive and an invasion of privacy.

And if you think masks will work as a way of keeping your mugshot off the database, you can think again. As early as May 2020, retail-oriented security-as-a-service providers, such as Facewatch, had updated their algorithms to recognize individuals wearing Covid face coverings, and they are currently working to “support persons adhering to religious customs such as niqabs to be provided an equal user experience when engaging with identification technology.”

Even if you don’t mind personalized advertising, and would never dream of lifting a can of baked beans without paying the cashier, that doesn’t mean that facial recognition technology is in any way ethical, or that the organizations that use it are acting in your best interests.

The United States has a history of protests, demonstrations, and on-the-ground political action. It’s a proud tradition that people will revolt against injustice, demonstrate for their democratic rights, and organize to fight the power.

Such demonstrations are pitted against some aspects of the establishment and the status quo. Naturally, the establishment pushes back with a mind-boggling array of technological toys such as drones, tanks, tear gas, and of course, facial recognition.

On June 14th 2020, Derrick Ingram, co-founder of the non-violent activist group Warriors In The Garden, was involved in an incident with an officer of the NYPD, during which Ingram was alleged to have used a megaphone to shout into the ear of the officer. Flash forward two months, and officers in riot gear, supported by police helicopters, turned up at Ingram’s apartment to arrest him.

Ingram had been identified using facial recognition of an image captured at the protest that matched with one on his personal Instagram account.

Ingram’s story is far from unique. Clearview AI — a private company that provides facial recognition services to both private and public organizations — lists among its customers: ICE, the Department of Justice, FBI, Customs and Border Protection (CBP), Interpol, and more than 600 law enforcement agencies. As of February 2020, NYPD officers had run more than 11,000 facial recognition searches.

There’s an argument that Derrick Ingram shouting into an officer’s ear-hole is technically assault, and that the individuals identified by facial recognition after the January 6th 2021 US Capitol riot were at least trespassing.

But as an exceptionally law abiding citizen, you have nothing to worry about. Right?

Wrong.

Laws change almost every day. And what is permitted one day may be outlawed the next. A perfectly legal activity can be criminalized by federal and state lawmakers with the stroke of a pen.

Texas State Senate Bill 8, which was signed by the Texas governor on May 19th and enacted on September 1st 2021, effectively bans abortion as early as six weeks of pregnancy. The bill was condemned by UN human rights experts, who said, “This law is alarming. It bans abortion before many women even know they are pregnant.”

The bill prevents state officials from enforcing the law, but it authorizes private individuals to sue anyone who performs or assists a post-heartbeat abortion.

Although I haven’t yet heard of facial recognition tech being used to identify staff at abortion clinics (or the women who use them), it doesn’t seem far-fetched to say that it could be used to enforce gray-area laws.

You Can’t Keep Your Face Out of Recognition Databases

Since the dawn of social media and cloud-photo storage, billions of people have taken advantage of ‘free’ services that can store, catalogue, and tag their images — unaware that the services were being used to create the datasets that make facial recognition possible.

In November 2021, Meta (the company formerly known as Facebook) announced that it would be shutting down the Face Recognition system on Facebook, stating, “There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

The Meta statement does not rule out using facial recognition altogether, and on the surface, it seems to only make the experience more frustrating for end users.

But Facebook was only ever a part of the problem. Clearview AI, the company so beloved by law enforcement agencies across the world, has historically scraped social media and other sites for faces and corroborative information rather than collaborating with the companies hosting the pictures. It is estimated that Clearview AI downloaded over 3 billion photos and used them to create facial recognition models of millions of citizens.

You’re going to have to accept that your pictures are already out there, that they’re tied to your identity, and that there is no way to change that. Sure, you can keep new images of yourself off the web, but that doesn’t change the fact that one or more privately owned machine learning tools can recognize you more accurately than a person can.

Use Fawkes to Fool an AI

Facial recognition tools can recognize you because they have a vast trove of data to analyze and discover the unique combination of features that make up your face. The fewer images available, the worse these tools will perform.

Another way to fool the systems is to feed them images that poison the model they use, making it less likely that they will be able to recognize you from a fresh snapshot or video feed.

While you can fill your social media feeds with images of people who aren’t you, this rather defeats the object of having social media in the first place. A dozen pictures of Rick Astley in place of your own handsome mug at bachelor parties and beach vacations will just annoy people.

Ideally, you want images that can be easily recognized by other people but will fool the machines and throw off their recognition algorithms.

I have a profile picture on this site. It’s the same image in the PIA slack channel, and the same one I use elsewhere. I’m quite happy for it to be associated with my name and my identity. People I know can tell that it’s me, but thanks to a process called ‘cloaking’, which involves making tiny, pixel-level changes that are invisible to the human eye, the models on which facial recognition relies are fed inaccurate information.
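To give a sense of the scale involved (and only the scale: Fawkes’s real perturbations are carefully optimized against facial recognition feature extractors, not random), here is a toy Python sketch that nudges every pixel of a hypothetical photo by a few intensity levels out of 255. Changes of this magnitude sit far below what the human eye notices.

```python
# Toy illustration only -- NOT the Fawkes algorithm. Fawkes computes an
# optimized perturbation; this just shows how small a "tiny, pixel-level
# change" really is. Assumes Pillow and NumPy; "me.jpg" is hypothetical.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("me.jpg").convert("RGB"), dtype=np.int16)

# Shift every pixel by at most +/-3 intensity levels (out of 255).
rng = np.random.default_rng()
noise = rng.integers(-3, 4, size=img.shape, dtype=np.int16)
cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)

Image.fromarray(cloaked).save("me_cloaked.png")
print("Max per-pixel change:", int(np.abs(cloaked - img).max()))
```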

Every new photo of me that appears online goes through the same process — meaning that as time goes on, the ability of companies and law enforcement to recognize my image diminishes.

The software responsible for this trickery is called Fawkes, and it was developed by the SAND Lab (Security, Algorithms, Networking and Data) at the University of Chicago to combat the ubiquity of facial recognition systems. The name is a deliberate pop culture reference to the Guy Fawkes mask worn by the protagonist of V for Vendetta; like the mask, the software is meant to afford its users some degree of anonymity.

After images have been altered by Fawkes, the inventors say that:

“You can then use these ‘cloaked’ photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, ‘cloaked’ images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, “uncloaked” image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.”

It’s that easy. The only issue is how many cloaked images it will take to thoroughly poison a model.

Run Your Own Fawkes

Fawkes is open-source software, meaning you can alter, distribute, and run it however you want. And while you can build it from source, the developers provide ready-to-run binaries for Windows, Mac, and Linux. If you’re on a limited-bandwidth connection, be aware that the download is almost 1 GB.
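If you prefer the pip-installable release to the binaries, invoking it is essentially a one-liner. Here’s a minimal sketch of driving the Fawkes CLI from Python; the photo folder is hypothetical, and “low” mode trades protection strength for speed.

```python
# A minimal sketch, assuming Fawkes was installed with `pip install fawkes`.
# Equivalent to running `fawkes -d ./photos --mode low` in a terminal.
import subprocess

photos_dir = "./photos"  # hypothetical folder of images to cloak

subprocess.run(["fawkes", "-d", photos_dir, "--mode", "low"], check=True)
```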

To give you an idea of the results, I generated a convincing face using thispersondoesnotexist.com, then ran it through Fawkes. After around 30 seconds, the cloaking process was complete. These are the results:

[Image: two visually identical photos of a woman, before and after cloaking]

To me, the two images are identical. If this person did exist, people who knew her might be able to pick out some differences, but to most observers they would appear to be the same picture.

To a facial recognition engine that already has a model built on thousands of images of this person, the cloaked photo is close enough to be accepted into its training data, but different enough to fundamentally alter the basis of the model.

A few hundred (or a few thousand) Fawkes-processed images will see the model thoroughly poisoned.

Limitations of Fawkes

Think of how many pictures of you are already out there on the internet. There are probably more than a few dozen in existence, and they are accessible to Google, Facebook, or other third-party scrapers. This means that it will take a lot of Fawkes images to make you invisible to the software models that will be developed in the future. For models that are already in existence and no longer updating, it won’t be effective at all.

Using Fawkes also means changing the way you upload photos. Many people have their photos set to upload to the cloud as soon as they are taken. If you plan on using Fawkes, you will instead need to transfer photos from your phone to your PC, run Fawkes (at around 30 seconds per photo), and then upload the cloaked images to your social media accounts or cloud storage provider of choice to be tagged.
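As a rough sketch of what that detour looks like in practice, the batch job below cloaks everything that has landed in a transfer folder and moves the results to an upload folder. It assumes the pip-installable Fawkes CLI and that cloaked copies are written next to the originals with a “_cloaked” suffix; all paths are hypothetical.

```python
# A rough sketch of the phone-to-PC-to-social-media workflow described above.
# Assumptions: the `fawkes` CLI is on the PATH, and cloaked copies are saved
# alongside the originals with a "_cloaked" suffix. Paths are hypothetical.
import pathlib
import shutil
import subprocess

inbox = pathlib.Path("~/Pictures/from_phone").expanduser()   # raw photos land here
outbox = pathlib.Path("~/Pictures/ready_to_upload").expanduser()
outbox.mkdir(parents=True, exist_ok=True)

# Cloak the whole folder in one batch run (roughly 30 seconds per photo).
subprocess.run(["fawkes", "-d", str(inbox), "--mode", "low"], check=True)

# Move only the cloaked copies into the upload folder.
for cloaked in inbox.glob("*_cloaked*"):
    shutil.move(str(cloaked), str(outbox / cloaked.name))
```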

You will also need to persuade your friends, family, or anyone else likely to tag pictures of you to do the same. But this may prove… more difficult.

There is also the unfortunate fact that facial recognition platforms evolve. They get better at what they do, and there are suspicions that certain vendors are working against Fawkes to make it less effective as a cloaking tool.

In January 2021, the University of Chicago team noted a significant change to Microsoft’s Azure facial recognition tools, observing that “Azure has been trained to lower the efficacy of the specific version of Fawkes that has been released in the wild.”

The current version of Fawkes (v1.0) is no longer vulnerable to the Azure update; however, there is no guarantee that future upgrades to any of the facial recognition platforms won’t render Fawkes’s protection less effective.

Can You Really Defeat Facial Recognition Software with Fawkes?

The answer is probably. Facial recognition makes a lot of money for the companies involved in it; Clearview AI, for instance, was estimated to be worth $100 million in 2020, and despite recent eight-figure fines for data protection violations, the company and others like it are unlikely to limit their activities so long as governments and police departments keep paying them. Their models will evolve, and Fawkes will have to evolve to keep ahead.

Currently, Fawkes is reckoned to be “100% effective against state-of-the-art facial recognition models (Microsoft Azure Face API, Amazon Rekognition, and Face++).”

So yes, you probably can. Using Fawkes to keep an accurate model of your face out of their hands is a hassle, though, and whether it’s worth it ultimately comes down to how much you value your privacy.

I’ve been using Fawkes for almost a year; every photo taken by anyone in my household is run through it, and I have upgraded with every new release.

For me, the peace of mind is worth the extra trouble. Plus, I’ve automated the process.
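My setup is nothing exotic. A simple polling loop like the sketch below (or a cron job doing the same thing) is all it takes; it reuses the same hypothetical folders and CLI assumptions as above, and the details will vary with your own setup.

```python
# One way to automate the cloaking step: poll a folder for new photos, run
# Fawkes over it, then park the uncloaked originals out of harm's way.
# Same assumptions and hypothetical paths as the sketch above.
import pathlib
import shutil
import subprocess
import time

watch = pathlib.Path("~/Pictures/from_phone").expanduser()
archive = watch / "originals"   # processed originals are kept offline here
archive.mkdir(exist_ok=True)

while True:
    fresh = [p for p in watch.glob("*.jpg") if "_cloaked" not in p.name]
    if fresh:
        subprocess.run(["fawkes", "-d", str(watch), "--mode", "low"], check=True)
        for photo in fresh:
            shutil.move(str(photo), str(archive / photo.name))
    time.sleep(60)  # check for fresh photos once a minute
```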

So take the time and do your own risk evaluation. How much do you value your privacy and anonymity?
