Clearview AI Offers to Eliminate Public Anonymity and Destroy Privacy around the World for a Mere $50 Million
PIA blog first wrote about the facial recognition start-up Clearview AI two years ago, when news about its huge database of three billion facial images appeared. Its main market is currently law enforcement, with which it has already had considerable success in the US. But two years is a long time in digital technology, and Clearview AI is moving forward. In October last year, its co-founder and CEO, Hoan Ton-That, told Wired that his company had now collected more than ten billion images. It was also working on new technologies, including “deblur” and “mask removal” tools. Around the same time, Clearview AI’s system was tested independently by the National Institute of Standards and Technology, and fared “surprisingly well”. In January of this year, the company was awarded a US patent for its identification technology, specifically for its ability “to gather publicly available information from the open internet (social media sites, mugshots, news sites and more) and then accurately match similar photos using its proprietary facial recognition algorithm.”
That approach of scraping facial images from public sites is highly controversial, and has led to considerable pushback across the spectrum. Early on, Twitter, Google, Facebook, and LinkedIn all told the company to stop harvesting photos from their services. In June 2020, the European Data Protection Board (EDPB), an independent European body that contributes to the consistent application of data protection rules throughout the European Union, wrote:
Without prejudice to further analysis on the basis of additional elements provided, the EDPB is therefore of the opinion that the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime.
That was an early indication that Clearview AI’s approach would run afoul of the EU’s privacy laws. As it turned out, other countries moved first: in February last year, Canada’s Office of the Privacy Commissioner said the company violated the country’s privacy laws. In November, Clearview AI was ordered to delete all facial recognition data referring to Australian citizens. Shortly afterwards, the UK’s Information Commissioner’s Office announced its “provisional intent to impose a potential fine of just over £17 million on Clearview AI”. The first GDPR ruling against Clearview AI came in December last year, from France. Other legal complaints against the company are still pending in Austria, Italy and Greece. Meanwhile, in the US, Clearview AI is accused of violating Illinois’s biometric privacy law, and faces class action lawsuits in New York and California. US politicians have also called for the Department of Homeland Security to stop using Clearview AI’s product. One of the most dramatic recent developments in the Clearview AI story is a report in the Washington Post, which appeared last week:
The facial recognition company Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post.
And the company wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor “gig economy” workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar.
The story is based on a 55-page “pitch deck”. In it, Clearview AI claims that with $50 million from investors — a surprisingly small sum compared to the valuations of other digital companies — it could grow its database to encompass almost everyone on the planet, or at least everyone for whom there is a digital photo. In addition, it would aim to expand its international sales team and spend more on lobbying government policymakers to “develop favorable regulation”. Although that is unlikely to apply in the EU, where the GDPR is the cornerstone of the region’s privacy laws, many other parts of the world lack similar legislation. Clearview AI doubtless sees opportunities to persuade lawmakers there to pass data protection laws that are compatible with its service. Significantly, the deck shows that Clearview AI has ambitions beyond law enforcement:
the presentation shows the company has based its “product expansion plan” on boosting corporate sales, from financial services and the gig economy to commercial real estate. On a slide devoted to its “total addressable market,” government and defense contracts are shown as a small fraction of potential revenue, with other possible sources including in banking, retail and e-commerce.
It’s important to note that this is just a pitch to investors, and Clearview AI’s plans to build a monster facial database may never be realized. But the prospect alone is deeply troubling: if such a database were ever constructed, it would spell the end of public anonymity. More or less anyone, anywhere, could be identified when out in the open, with all the implications that has for basic human rights. In effect, privacy would no longer exist in public spaces. It would also be seriously compromised in private ones, since increasing numbers of “smart” devices come with cameras that could be used to monitor who is doing what in offices, in shops, and in the home.
Even if Clearview AI fails to obtain the investment it seeks, it is quite likely that others — notably intelligence services of major powers — are already building such a total database, just rather more discreetly than Clearview AI. Once digital images exist of most people on the planet, the temptation to harvest as many of them as possible, and to use them for total surveillance, will be extremely strong. Enjoy your public anonymity while you can.
Featured image by Ivo Kruusamägi.