Should Facial Recognition Technologies Be Regulated by the Government? Microsoft Says ‘Yes’

Posted on Jul 21, 2018 by Glyn Moody

Facial recognition technology represents one of the most serious threats to privacy. That’s for two principal reasons. Perhaps the most important is that it is almost impossible to change our faces: serious plastic surgery apart, there are few effective techniques for disguising our facial appearance. Masks may hide our features, but are too cumbersome – and too obvious – to use on a routine basis. The other reason is that facial recognition technologies are improving rapidly, and are likely to continue to do so thanks to advances in both hardware and software.

These developments imply that low-cost, almost-perfect facial recognition systems will soon be routinely available to both governments and companies everywhere. The death of public anonymity is a real prospect. The problem is clear enough, but what might be done about it is less evident. In a rather unexpected intervention, Brad Smith, Microsoft’s president and chief legal officer, has just offered his own solution. Since it appears as a 3,500-word essay on the main Microsoft blog, it presumably represents the company’s official position. Here’s what Smith – and thus his employer – suggests:

The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.

That is remarkable, given Microsoft’s fraught history with government interventions. Particularly since the pivotal antitrust case brought against the company by the US government in 1998, Microsoft has generally argued against any kind of government interference in its markets. The fact that it is not just acquiescing in moves to regulate facial recognition, but taking the lead in actively calling for them, is noteworthy. It reflects both the seriousness of the problem and Microsoft’s changed situation and culture.

In his essay, Smith notes obvious concerns about governments using large-scale facial recognition at rallies to populate databases that record people’s views, or companies tracking everything visitors do inside stores, and sharing information with each other about what customers browse and buy. He also notes the paradox that even as facial recognition technology advances, in some respects it remains limited by built-in biases: today’s systems work more accurately for white men than for white women, and are more accurate at identifying people with lighter complexions than people of color. Those are well-known issues. More interesting is his explanation of why Microsoft has taken the unusual step of calling for government intervention:

As the country was transfixed by the controversy surrounding the separation of immigrant children from their families at the southern border, a tweet about a marketing blog Microsoft published in January quickly blew up on social media and sparked vigorous debate. The blog had discussed a contract with the U.S. Immigration and Customs Enforcement, or ICE, and said that Microsoft had passed a high security threshold; it included a sentence about the potential for ICE to use facial recognition.

Smith writes that the contract mentioned in that marketing blog post did not involve facial recognition technologies. But he and his colleagues are acutely aware that public concern about the use and abuse of facial recognition technology is not going away, and have evidently decided to address it sooner rather than later. Smith notes that some are calling for companies to self-regulate, but he cautions: “As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.” He then offers a defense of government regulation in general:

there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike. The auto industry spent decades in the 20th century resisting calls for regulation, but today there is broad appreciation of the essential role that regulations have played in ensuring ubiquitous seat belts and air bags and greater fuel efficiency. The same is true for air safety, foods and pharmaceutical products. There will always be debates about the details, and the details matter greatly. But a world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards.

Smith points out that recently Microsoft has been a vocal supporter of the General Data Protection Regulation in the EU. The company’s call for government regulation of facial recognition technologies can be seen as an outgrowth of its early interest in privacy protection. It’s an approach that has clear reputational benefits for Microsoft, which in the past has been seen by many as high-handed and acting as if it were above the law.

It’s a shrewd move for another reason. As Privacy News Online has reported, Amazon has come under fire for the roll-out of its cloud-based facial recognition services. Microsoft has less to fear from regulations, since it is lagging somewhat in selling similar systems. By positioning itself as the “ethical” supplier, it can hope to win business from governments and companies concerned about the privacy implications. New regulations will probably be more of a hindrance for Microsoft’s rivals in this field, such as Amazon and Google, that are further along in placing facial recognition at the heart of future services.

Alongside its call for a bipartisan expert commission to assess the best way to regulate the use of facial recognition technology, Smith’s post also suggests some specific questions that must be answered. For example, what kind of oversight is needed for police and government use of facial recognition technologies? Should the use of unaided facial recognition technology as evidence of an individual’s guilt or innocence of a crime be allowed? Does the use of facial recognition by public authorities or others require minimum performance levels for accuracy? What about the rights of individuals to know that facial recognition is being used in public or commercial spaces? What redress mechanisms should there be for individuals who believe they have been misidentified by a facial recognition system?

Smith also offers his thoughts on the specific responsibilities of the tech sector. He says more work must be done to reduce the risk of bias in facial recognition technology. Smith promises that Microsoft will create and publish a transparent set of principles for facial recognition technology. He says that to craft these principles, Microsoft will draw on the expertise and input of its employees, but will also seek the views of external stakeholders, including customers, academics and human rights and privacy groups. Smith pledges that Microsoft will move “more slowly” in its deployment of advanced facial recognition technology. That might seem a strange promise to make, but he goes on to explain:

Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. “Move fast and break things” became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.

“Move fast and break things” was Facebook’s motto until 2014, and this promise to move more slowly is clearly another dig at competitors that are, Microsoft implies, moving too fast, with serious adverse consequences for the public.

Despite the blatant attempt to weaponize privacy for its own benefit, Microsoft’s call for government regulation of facial recognition systems is nonetheless an important development. It will place pressure on the other major players in this sector to acknowledge the importance of protecting privacy in their rush to exploit the technology. It will also strengthen efforts around the world to draw up sensible legal frameworks for obtaining the benefits of facial recognition systems while avoiding their worst problems.

Featured image by Steve Jurvetson.

About Glyn Moody

Glyn Moody is a freelance journalist who writes and speaks about privacy, surveillance, digital rights, open source, copyright, patents and general policy issues involving digital technology. He started covering the business use of the Internet in 1994, and wrote the first mainstream feature about Linux, which appeared in Wired in August 1997. His book, "Rebel Code," is the first and only detailed history of the rise of open source, while his subsequent work, "The Digital Code of Life," explores bioinformatics - the intersection of computing with genomics.
