China’s AI-based prisons – both indoors and outdoors – offer a warning of how privacy may die elsewhere

Posted on Apr 13, 2019 by Glyn Moody

Online freedom of speech is under attack around the world. The EU’s new Copyright Directive is about to become law, and brings with it a need to filter all uploads to most sites. Once filters are in place, it will be easy to use them for blocking things other than alleged copyright infringement. Australia has brought in a law to remove “abhorrent violent material”, the EU is working on something similar, and the UK has recently announced plans to go even further with its vague “online harms” legislation. It intends to create a formal censorship system that would lay down for the country’s ISPs and online services which subjects are and are not permissible, and to force them to monitor all user communications in order to prevent any “illegal content” from appearing online. Cory Doctorow calls this the “Chinafication of the Internet”, since it represents the West taking some of the worst technological developments in China and applying them to its own societies.

As previous Privacy News Online posts have described, China’s crackdown on freedom of speech has been accompanied by an equally brutal attack on privacy. Nowhere is the use of surveillance by the Chinese authorities more thoroughgoing than in the Western province of Xinjiang. An increasing number of detailed and credible reports have revealed how hundreds of thousands of China’s Turkic-speaking Uyghur population are being held in “re-education” centers. In truth, these are concentration camps where Uyghurs are indoctrinated and pressured to renounce their Muslim culture. Even outside the camps, cities in Xinjiang are effectively open-air prisons.

The high-tech nature of the control of Xinjiang is described in a new article from Logic magazine. At the heart of the surveillance lies the region’s “Integrated Joint Operations Platform” (IJOP). The system gathers information from multiple sources. One is CCTV cameras, some of which have facial recognition or infrared capabilities for night use. Cameras are positioned in locations that the authorities consider sensitive – for example, schools, entertainment venues, supermarkets, and homes of religious figures. Another source for the database is “wifi sniffers,” which collect the identifying MAC addresses of computers, smartphones, and other networked devices that are within range. The IJOP also receives information such as license plate numbers and citizen ID card numbers from the region’s many security checkpoints, and from “visitors’ management systems” in access-controlled communities. Mobile phones are constantly monitored using Jingwang software, which has to be installed on everyone’s phone in the region. AI plays an important role in the IJOP:

programs automate the identification of Uyghur voice signatures, transcribe, and translate Uyghur spoken language, and scan digital communications, looking for suspect patterns of social relations, and flagging religious speech or a lack of fervor in using Mandarin. Deep-learning systems search in real time through video feeds capturing millions of faces, building an archive which can help identify suspicious behavior in order to predict who will become an “unsafe” actor. The predictions generated automatically by these “computer vision” technologies are triggered by dozens of actions, from dressing in an Islamic fashion to failing to attend or fully participate in nationalistic flag raising ceremonies. All of these systems are brought together in the IJOP, which is constantly learning from the behaviors of the Uyghurs it watches.
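The “wifi sniffer” technique mentioned above needs no cooperation from the device being tracked: every 802.11 management frame a phone broadcasts carries its MAC address in a fixed position in the frame header, so a passive listener can log it. The sketch below illustrates just that header layout; the frame bytes are invented for the example, not real captured traffic.

```python
# Minimal sketch of what a passive "wifi sniffer" extracts: the source
# MAC address carried in every 802.11 management frame a device sends.

def source_mac(frame: bytes) -> str:
    """Return the transmitter (source) MAC address of an 802.11 frame.

    In the 802.11 MAC header, address 2 (the transmitter) occupies
    bytes 10-15, after frame control (2 bytes), duration (2 bytes),
    and address 1 (6 bytes).
    """
    addr2 = frame[10:16]
    return ":".join(f"{b:02x}" for b in addr2)

# Hypothetical probe-request header: address 1 is the broadcast address,
# address 2 is the phone's own MAC - the identifier a sniffer logs.
frame = bytes([
    0x40, 0x00,                          # frame control: probe request
    0x00, 0x00,                          # duration
    0xff, 0xff, 0xff, 0xff, 0xff, 0xff,  # addr1: broadcast
    0xaa, 0xbb, 0xcc, 0x11, 0x22, 0x33,  # addr2: device (source) MAC
    0xff, 0xff, 0xff, 0xff, 0xff, 0xff,  # addr3
    0x00, 0x00,                          # sequence control
])

print(source_mac(frame))  # -> aa:bb:cc:11:22:33
```

Because the MAC address is unique to the device, logging it at many locations over time is enough to reconstruct a person’s movements – which is precisely why it is valuable to a system like the IJOP.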

Those are clearly highly intrusive uses of AI for monitoring people. Moreover, they are likely to be inaccurate and to throw up large numbers of false positives. That’s true of all AI systems, but may be especially true in China, whose capabilities in this field are often overestimated by Western observers. But in the context of a repressive state system, making mistakes is hardly something the authorities will worry too much about. And China is already working on an even more oppressive form of total surveillance, which is being rolled out inside some of its prisons, as this news item in the South China Morning Post explains:

The new “smart jail” system involves a network of surveillance cameras and hidden sensors that reach out like “neuron fibres”, as one of the sources put it, through the compound with a blanket coverage extending into every cell.

The network collects and streams data to the “brain”, a fast, AI-powered computer that is able to recognise, track and monitor every inmate around the clock, without blinking.

At the end of each day, the system generates a comprehensive report, including behavioural analysis, on each prisoner using different AI functions such as facial identification and movement analysis.

The AI system can track up to 200 faces at a time, so it’s not possible for prisoners to gain a little privacy by mingling with prison crowds at meal times, say. That means they are likely to be under AI-based surveillance 24 hours a day. It’s like assigning a prison guard to stay with them wherever they go, with the difference that it will never be possible to overpower, trick, or bribe the non-human guard. A future development might be to marry this new machine-learning approach with the surveillance systems found in places like Xinjiang. It will soon be possible to use AI to make someone a prisoner in their own home, with every movement monitored, and every word analyzed.
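The core of the multi-face tracking described above is identity persistence: each newly detected face position is matched back to the nearest previously known track, so an ID follows a person through a crowd. The toy sketch below shows that matching step under simplifying assumptions – the detections are invented (x, y) coordinates rather than output from any real camera, and production systems pair this with face recognition rather than position alone.

```python
# Toy sketch of nearest-neighbour identity tracking: each detection is
# matched to the closest existing track, so IDs persist across frames.
from math import dist

class CentroidTracker:
    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.tracks = {}  # track ID -> last known (x, y) position
        self.max_distance = max_distance

    def update(self, detections):
        """Assign a persistent ID to each detected position."""
        assigned = {}
        unclaimed = dict(self.tracks)
        for point in detections:
            # Match to the closest existing track, if it is close enough;
            # otherwise start a new track with a fresh ID.
            best = min(unclaimed, key=lambda i: dist(unclaimed[i], point),
                       default=None)
            if best is not None and dist(unclaimed[best], point) <= self.max_distance:
                assigned[best] = point
                del unclaimed[best]
            else:
                assigned[self.next_id] = point
                self.next_id += 1
        self.tracks = assigned
        return assigned

tracker = CentroidTracker()
frame1 = tracker.update([(100, 100), (400, 300)])  # two faces appear: IDs 0 and 1
frame2 = tracker.update([(108, 104), (395, 310)])  # slight movement: same IDs persist
```

Once an ID sticks to a face across every frame, mingling with a crowd offers no anonymity – the system never loses track of who is who, which is what makes round-the-clock per-inmate reports possible.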

The “success” of the repression of the Uyghurs is likely to be extended to the rest of China. According to the Wall Street Journal, the man behind the crackdown in Xinjiang is now “setting the tone for the country’s shift toward harsher, technology-driven authoritarian rule.” China is also starting to export its extreme surveillance technologies. An article in The Diplomat reports how governments in Mongolia, Ethiopia, Zimbabwe, Malaysia, Ecuador and elsewhere have bought China’s AI-based surveillance systems. Whether Huawei’s telecom equipment can be trusted has become a major political issue, and is often in the headlines. But the less well-known Chinese company Hikvision is already selling its equipment for use in highly sensitive contexts in the UK. Hikvision has been installing its cameras in Xinjiang since at least 2012. That’s a useful reminder that as the West starts using Chinese surveillance systems, it is not just a philosophy that it is importing, but possibly hostile surveillance of its own citizens too.

Featured image by Colegota.