Police Are Using AI for Mass Surveillance in the US and in Europe

Posted on May 31, 2023 by Glyn Moody

There is a tension in the world of privacy. On the one hand, governments, their intelligence services, and their police forces say we need more surveillance to better protect the public. On the other hand, many people are concerned that such tools will be used to control society through 24/7 monitoring of activities that are currently legal.

To overcome public fears and suspicions about increased surveillance, the authorities have become adept at exploiting special circumstances to roll out more monitoring, which then becomes permanent. A good example is happening right now, in the shape of AI-based surveillance of major public events.

AI Surveillance Is Growing in the US and in the EU

Law enforcement argues that the exceptional nature of events attended by tens or even hundreds of thousands of people requires exceptional responses – in the form of advanced technology capable of scanning crowds and spotting threats within them. Here’s what happened in the US earlier this year, when Phoenix hosted Super Bowl LVII, according to a feature in The Atlantic:

In preparation for the game, the local authorities upgraded a network of cameras around the city’s downtown – and have kept them running after the spectators have left. A spokesperson for the Phoenix Police Department would not confirm the exact type of the cameras installed, but ABC15 footage shows that they are a model manufactured by Axis Communications with enough zooming capability to produce a close-up portrait of any passerby from an extended distance, even when it’s completely dark out.

Although the Phoenix police insisted that the upgraded cameras did not have facial recognition capabilities, The Atlantic article points out that the manufacturer’s website confirms that the cameras are, in fact, embedded with an “AI-based object detection and classification” system.
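To make that phrase more concrete, here is a minimal, purely illustrative sketch of the kind of per-frame processing an “AI-based object detection and classification” system performs. It uses an off-the-shelf pretrained detector from the open-source torchvision library as a stand-in; the model, confidence threshold, and file name are assumptions made for illustration, not details of the Axis Communications cameras, whose software is not public.

```python
# Illustrative sketch only: a generic object-detection pass over one video frame.
# This is NOT the software running on any real surveillance camera.
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# A standard pretrained detector stands in for whatever embedded model a camera might run.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON_CLASS_ID = 1          # "person" in the COCO label set this model was trained on
CONFIDENCE_THRESHOLD = 0.8   # arbitrary cut-off, chosen only for this sketch

def detect_people(frame_path):
    """Return bounding boxes for people detected in a single camera frame."""
    frame = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([frame])[0]   # dict with "boxes", "labels", "scores"
    people = []
    for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
        if label.item() == PERSON_CLASS_ID and score.item() >= CONFIDENCE_THRESHOLD:
            people.append([round(v, 1) for v in box.tolist()])
    return people

if __name__ == "__main__":
    # "frame.jpg" is a placeholder for one captured video frame.
    print(detect_people("frame.jpg"))
```

Even at this level of simplification, the point is clear: such a camera is not just recording footage, it is continuously classifying what, and who, appears in the frame.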

On the other side of the Atlantic, the 2024 Paris Olympics will see another application of AI surveillance, this time on a massive scale. Politico reports that the French government wants to use AI-powered real-time camera surveillance systems to spot “suspicious behavior, including unsupervised luggage”, and to trigger “alarms to warn of crowd movements like stampedes”.

Given the privacy implications, a special law had to be passed by the French parliament to implement this technology, and a challenge at the country’s top constitutional court failed to stop the plans. The judges wrote that deploying the technology would not infringe upon people’s right to privacy because “the development, implementation and possible evolution of algorithmic processing” remains under human control.

However, around 40 Members of the European Parliament warned of the dangers of adopting this approach:

this measure threatens the very essence of the right to privacy, data protection and freedom of expression, making it contrary to international and European human rights law. In line with democratic values and principles, during large-scale events such as the Olympic Games, which thousands of EU citizens are expected to attend, it is essential to ensure that fundamental rights are fully protected and that conditions are created for public debate, including political expression in public spaces.

The UK Is a Leader in AI-Based Surveillance

The UK government has no qualms about deploying AI-based surveillance for big events. During the coronation of King Charles III, London’s Metropolitan Police carried out one of its largest deployments of real-time facial recognition. Figures released afterwards show the scale of the operation and underline how disproportionate it was: over 80,000 faces were scanned during the event, producing two alerts and just one arrest.

A few weeks later, UK police used facial recognition technology at a Beyoncé concert in Wales, where there were an estimated 60,000 people. Madeleine Stone, the Legal and Policy Officer of the UK non-profit privacy group Big Brother Watch, pointed out that live facial recognition was not mentioned in any UK law, had never been debated by politicians, and was “one of the most privacy-intrusive technologies ever used in British policing”.

Despite this, the UK government is keen to roll out real-time AI-based facial recognition technology more widely. That includes the body-worn cameras used by police as they patrol the streets.

However, Professor Fraser Sampson, the UK’s Biometrics and Surveillance Camera Commissioner and the subject of a PIA post last year, is troubled by the move. The Guardian quotes him as saying:

A camera on an officer walking down the street could check the faces against a watchlist of suspects. They could check hundreds if not thousands of people while on duty.

The technology will be capable of doing many things, not all of which the public would want. In China the algorithm can pick up ethnicity.

It will be able to estimate age; some manufacturers claim it can estimate someone’s mood or state of anxiety.
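To illustrate the kind of matching Sampson is describing, in which every face captured by a body-worn camera is checked against a watchlist, here is a minimal sketch in Python. It assumes face embeddings (numerical vectors) have already been extracted by some face recognition model, and simply compares them against a watchlist using cosine similarity; the watchlist entries, threshold, and embedding size are invented for illustration and do not describe any system actually used by UK police.

```python
# Illustrative sketch only: matching a face embedding against a watchlist.
# The watchlist entries, threshold, and embedding size are invented; this does
# not describe the matching logic of any real police system.
import numpy as np

EMBEDDING_SIZE = 128
MATCH_THRESHOLD = 0.6  # arbitrary similarity cut-off for this sketch

def _unit(v):
    """Normalise a vector to unit length so dot products give cosine similarity."""
    return v / np.linalg.norm(v)

# Hypothetical watchlist: identifier -> precomputed, normalised face embedding.
rng = np.random.default_rng(0)
WATCHLIST = {
    "suspect_a": _unit(rng.normal(size=EMBEDDING_SIZE)),
    "suspect_b": _unit(rng.normal(size=EMBEDDING_SIZE)),
}

def check_against_watchlist(face_embedding):
    """Return (identifier, similarity) of the best match above the threshold, else None."""
    probe = _unit(np.asarray(face_embedding, dtype=float))
    best_name, best_score = None, -1.0
    for name, reference in WATCHLIST.items():
        score = float(np.dot(probe, reference))  # cosine similarity of unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= MATCH_THRESHOLD else None

if __name__ == "__main__":
    # A random vector stands in for an embedding extracted from a live camera frame.
    print(check_against_watchlist(rng.normal(size=EMBEDDING_SIZE)))
```

Even this toy version makes the policy problem concrete: a single numerical threshold decides whether a passer-by becomes an alert, which is why figures like the coronation’s 80,000 scans, two alerts, and one arrest matter so much.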

Sampson said “the ability of the state to watch every move is real and needs to be addressed in any future regulatory framework about the state’s use of this technology”. Although such frameworks are likely to be drawn up in due course, the danger is that by then it will be too late.

Using major sporting and public events as test beds for new AI-based surveillance technology is a clever strategy. It is hard to argue that extra measures should not be taken to protect extra-large gatherings of the public. At the same time, once such monitoring has been used, it is harder to argue against its deployment in other contexts, or even routinely, as the UK government would like.

These developments in mass AI-based surveillance prove yet again that the fight for privacy, like the fight for freedom, is never one and done. Privacy is an endless series of battles we all need to fight, because their outcome will affect everyone for generations to come.

Featured image by Big Brother Watch.