Court finds UK police use of facial recognition technology breaches privacy rights, data protection laws and equality laws
Automated facial recognition has emerged as one of the most problematic technologies for privacy. That’s reflected in the increasing number of posts on this blog dealing with the issues it raises. Of particular concern is police use. The UK has been in the vanguard here, and so has the pushback from privacy campaigners. A year ago UK human rights campaigners described the increasing use of automated facial recognition systems by the police as putting “arsenic in the water of democracy”. One local police force in the UK, South Wales Police, is particularly keen on the technology. Privacy News Online wrote about its pioneering use of automated facial recognition to spot a man on a police watch list as far back as 2017.
In May 2019, Ed Bridges from Wales began a crowdfunded legal action against the police force in South Wales for what he claimed was an unlawful violation of his privacy because he was subject to facial recognition scanning by the police. He said it breached data protection and equality laws. Initially, he was unsuccessful – the lower court threw out his claim for a judicial review. Bridges had argued that automated facial recognition was not compatible with the right to respect for private life under Article 8 of the European Convention on Human Rights (ECHR). But he then appealed to the UK’s Court of Appeal on five grounds. The higher court has now ruled that three of those grounds are valid.
The first problem with the South Wales Police use of automated facial recognition is that there was no clear guidance on where it could be used, or who could be put on the watch lists. The Court of Appeal said this failed to meet the standard required by Article 8 of the ECHR. The judges also agreed with a more technical point that the South Wales Police had failed to provide an adequate “data protection impact assessment”, as required under the UK’s Data Protection Act. The final reason the court agreed with the appeal is that the South Wales Police had failed to comply with something called the Public Sector Equality Duty (“PSED”):
The Court held that the purpose of the PSED was to ensure that public authorities give thought to whether a policy will have a discriminatory potential impact. [South Wales Police] erred by not taking reasonable steps to make enquiries about whether the [automated facial recognition] Locate software had bias on racial or sex grounds. The Court did note, however, that there was no clear evidence that [automated facial recognition] Locate software was in fact biased on the grounds of race and/or sex.
This is an important general point. It is now well recognized that AI systems are not automatically neutral, and may even be extremely discriminatory in the way they work. As the New York Times reported two years ago:
Research has shown that automated systems that are used to inform decisions about sentencing produce results that are biased against black people and that those used for selecting the targets of online advertising can discriminate based on race and gender.
That real risk of bias is problematic in the UK because of the PSED, which requires public authorities in the country to take proactive steps against precisely this problem – steps that South Wales Police failed to take. In part, that is because such checks are currently hard to carry out, and more work is needed to come up with frameworks that allow AI-based systems to be checked readily for bias of any kind. At the moment, the black box approach to AI design makes it impossible to know exactly how these systems work – an unacceptable state of affairs for a technology that can have such a major impact on people’s lives, not least when deployed by police forces. The Court of Appeal’s ruling now makes implementing ways of allowing routine scrutiny an urgent task.
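What such routine scrutiny might look like in practice is still an open question, but one common starting point is a simple disparity check: comparing a system’s false match rate across demographic groups on labelled test data. The sketch below is purely illustrative – the data, group labels and function names are all assumptions, not anything drawn from the South Wales Police system or any real benchmark.

```python
# Hypothetical sketch of a per-group bias check for a face-matching system.
# All records here are made up; a real audit would use labelled benchmark
# datasets and the system's actual match decisions.

from collections import defaultdict

# Each record: (demographic group, system predicted a match, pair is truly
# the same person). Invented values for illustration only.
records = [
    ("group_a", True, False),   # false match
    ("group_a", False, False),  # correct rejection
    ("group_a", True, True),    # correct match
    ("group_b", True, False),   # false match
    ("group_b", True, False),   # false match
    ("group_b", False, False),  # correct rejection
]

def false_match_rate_by_group(records):
    """False match rate per group: wrongly matched pairs divided by all
    genuinely non-matching pairs in that group."""
    stats = defaultdict(lambda: {"false": 0, "negatives": 0})
    for group, predicted, actual in records:
        if not actual:  # the pair is genuinely two different people
            stats[group]["negatives"] += 1
            if predicted:
                stats[group]["false"] += 1
    return {g: s["false"] / s["negatives"]
            for g, s in stats.items() if s["negatives"]}

rates = false_match_rate_by_group(records)
print(rates)  # a large gap between groups would signal potential bias
```

This is only the crudest possible test – real audit frameworks also need to handle confidence thresholds, intersecting attributes and statistical significance – but even a check this simple would have been evidence of the “reasonable steps to make enquiries” that the court found lacking.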
The challenge to the use of automated facial recognition technology was brought on behalf of Bridges by Liberty, a UK human rights organization. Liberty described the Court of Appeal’s decision as “a huge step against [an] oppressive surveillance tool”. It is certainly a win, and one that South Wales Police has said it will not appeal against. However, it is by no means the end of police use of facial recognition technologies in the UK. That will continue, but subject to the constraints and conditions introduced by the Court of Appeal’s ruling.
The judgment only applies to the UK, but the point raised about the need to ensure that black box systems do not hide biased rules is something that privacy activists in other jurisdictions are also raising with increasing urgency. It is part of a larger question about algorithmic transparency that is likely to be of relevance to every country. The UK judgment in itself is unlikely to change things much outside the UK, but it does at least add to the pressure for AI systems to respect human rights such as privacy, and for that to be demonstrable through formal code inspection, not just vaguely assumed on the basis of manufacturers’ optimistic claims.
Featured image by South Wales Police.