Powerful and pervasive artificial intelligence is coming: now is the time to talk about its impact on privacy

Posted on Oct 18, 2017 by Glyn Moody

Artificial intelligence (AI) is rather like the GNU/Linux desktop: every year is the one when it will finally take off. Indeed, this has been true for AI far longer than for the GNU/Linux desktop, since it is generally held that AI as a discipline was born back in 1956, whereas the GNU project only started in 1983. But even if it’s best to be wary of claims that AI has definitely arrived this time, there are a couple of straws in the wind that suggest, at the very least, it is undergoing an important step change.

Two factors are driving today’s acceleration: powerful hardware, and lots of money. On the hardware side, there is the often overlooked fact that even low-cost smartphones are more powerful than top supercomputers of just a few decades ago. Since basic smartphone features like phone calls and online activities require very little of that power, there’s plenty left to dedicate to advanced AI-type features.

A more recent hardware development is the emergence of specialized AI-optimized chips: Apple has the Neural Engine, part of its A11 Bionic chip, while Intel says it will be shipping its new Nervana Neural Network Processor family of chips designed for machine-learning applications by the end of this year. The Huawei Mate 10, launched this week, also includes a neural network processing unit (NPU), designed specifically for running neural networks, probably the hottest AI technology at the moment. As a review of the Huawei Mate 10 in Ars Technica pointed out, the NPU is being used in a novel way:

“Huawei isn’t necessarily focused on building an AI assistant that users can directly interact with (as they would a human assistant). Instead, the company appears to be leveraging its AI hardware and software to make its smartphones more powerful and smarter over time.”

That is, Huawei is not simply using the NPU to boost the capabilities of a voice-activated assistant, a fairly conventional application of AI these days, or to optimize the quality of pictures taken with its camera – another popular approach. Instead, it aims to use AI to learn how a user interacts with the phone and its apps, and to adapt those apps accordingly, making the device more useful.

That’s a significant shift, because it elevates AI from a lower-level technology that can be embedded in sub-systems to create some clever apps – like voice-activated assistants or intelligent camera options – to a higher-level, active process that constantly changes many aspects of the entire system itself. In other words, this more advanced form of AI creates a malleable, evolving product that responds to and customizes itself according to the needs and wishes of individuals.

The second factor driving rapid change in the world of AI is money. Investors are beginning to pile into the sector in the hope that they can make a killing when successful start-ups file for an IPO or are bought by richer companies looking to ramp up their AI capabilities quickly. Although money has been moving into AI for a while, recent news underlines just how much will soon be flowing into this sector. Masayoshi Son, the head of the Japanese conglomerate SoftBank, has revealed a few details about how he intends to invest money from his new $100 billion Vision Fund – yes, that’s $100,000,000,000 – which SoftBank unveiled last October with money from Saudi Arabia and others. The New York Times reports:

“The Japanese billionaire said he believed robots would inexorably change the work force and machines would become more intelligent than people, an event referred to as the ‘Singularity.’ As a result … he is on a mission to own pieces of all the companies that may underpin the global shifts brought on by artificial intelligence to transportation, food, work, medicine and finance.”

This is not about applying a little AI cleverness to a product or service: it is about re-inventing entire sectors in the light of what large-scale AI will bring with it. Given the scale of the transformation involved, it would be surprising if privacy, too, were not greatly affected. Privacy News Online has already pointed out a number of ways in which AI is starting to have effects – often adverse ones – on our privacy. But the problem is that, while technology and investments charge ahead, very little is being done to examine the implications for digital privacy in a world where AI is powerful and pervasive.

An exception is work from Privacy International (PI), in the form of a response to an inquiry about AI carried out by a specialist committee within the UK’s Parliament. Although the word “privacy” occurs nine times in the 77-page document published by the committee, the references are depressingly superficial, and there is no attempt to explore the complex privacy issues that AI raises. PI’s submission is more concrete. It singles out four specific problems for privacy that the widespread use of AI will bring:

“we are primarily concerned about current and future applications of AI that are designed for the following purposes: (1) to identify and track individuals; (2) to predict or evaluate individuals or groups and their behaviour; (3) to automatically make or feed into consequential decisions about people or their environment; and (4) to generate, collect and share data.”

PI’s submission notes that it is often very hard to challenge or correct inaccurately inferred or predicted information. The workings of AI systems are rarely made public, and even if they were, it might not be possible to analyze them in any meaningful way. Somewhat optimistically, PI writes: “Black boxing should not be permissible wherever AI systems are used to make or inform consequential decisions about individuals or their environment.” It suggests:

“Individuals should be provided with sufficient information to enable them to fully comprehend the scope, nature, and application of AI, in particular with regards to what kinds of data these systems generate, collect, process, and share. When AI algorithms are used to generate insights or make decisions about individuals, users as well as regulators should be able to determine how a decision has been made, and whether the regular use of these systems violates existing laws, particularly regarding discrimination, privacy, and data protection. Governments and corporations who rely on AI should publish, at a very minimum, aggregate information of the kind of systems being developed and deployed.”

PI’s document makes the important point that AI’s benefits and harms are currently distributed unequally:

“Industry gains most from AI, with large tech companies (and selected government agencies) having unprecedented access to vast troves of data on billions of people around the world. Consumers and citizens are frequently unaware about the scope, granularity, and sensitivity of data that third parties hold about them, or that their data is being used to train and develop AI systems.”

Similarly, it points out that badly designed AI applications can often disproportionately affect the most vulnerable in society, notably those from poorer backgrounds. Finally, it warns that AI systems can perpetuate existing injustices and inequalities in society through bias and discrimination that are built invisibly, and often completely unconsciously, into the system.

These are all hard issues, and finding ways to mitigate the problems will take considerable effort. However, before that can happen we need to initiate a broad-based public discussion about how AI could or should affect our future lives in general, and our privacy in particular.

Featured image by Cryteria.