How putting artificial intelligence in Google Glass-like systems could both help and harm our privacy

Posted on Sep 1, 2018 by Glyn Moody

Remember Google Glass? Five years ago, it was the hot new accessory for those who wanted to live at the bleeding edge of technology. But once Glass headsets started being used in public, people realized that they represented a massive intrusion into the private lives of everyone nearby. As Wikipedia puts it: “The headset received a great deal of criticism and legislative action due to privacy and safety concerns.”

In 2015, Google announced that it would stop selling the Google Glass prototype to “Glass Explorers”. Since then, the idea of a head-mounted device with a built-in first-person camera has faded from public view, but it certainly hasn’t gone away. Many may be surprised to learn that Google quietly re-launched Google Glass back in 2017, this time in the form of the Glass Enterprise edition. At that time, The Verge reported:

The major upgrades between the original Glass and the enterprise version are a better camera (with resolution upgraded from 5 megapixels to 8), extended battery life, faster Wi-Fi and processor, and a new red light that turns on when recording video. The electronics of Glass have also been made modular in the shape of a so-called Glass Pod, which can be detached and reattached to Glass-compatible frames, which can include things like safety goggles and prescription glasses.

As its name suggests, Google Glass Enterprise edition is aimed squarely at businesses rather than consumers. That’s in part a response to the adverse reaction that people had to the devices being used in public spaces. But a 2017 article in Wired notes that even in the more controlled work environment, there are still privacy issues. One company using Glass headsets discussed the idea of installing a “bathroom bar” where people can hang their headsets to make sure that no one is snapping photos in inappropriate contexts. In offices or on the factory floor, Glass headsets might still be used to record others saying or doing things without their permission.

A recent academic research project took an interesting approach to mitigating this problem. The German team behind it was led by Andreas Bulling, Professor of Human-Computer Interaction and Cognitive Systems at the University of Stuttgart, and head of the Perceptual User Interfaces Group at the Max Planck Institute for Informatics:

we present PrivacEye, a proof-of-concept system that detects privacy sensitive everyday situations and automatically enables and disables the first-person camera using a mechanical shutter. To close the shutter, PrivacEye detects sensitive situations from first-person camera videos using an end-to-end deep-learning model. To open the shutter without visual input, PrivacEye uses a separate, smaller eye camera to detect changes in users’ eye movements to gauge changes in the “privacy level” of the current situation.

The use of a physical shutter in this way is rather clunky, and the need for a second camera is an additional inconvenience. But what makes the work of note is the way artificial intelligence techniques are applied to detect automatically whether a social context is likely to be privacy-sensitive. If it is, the first-person camera is prevented from recording. To establish whether blocking was necessary, the researchers used a neural network that monitored the eye movements of the person wearing the first-person camera. One advantage of this approach is that it is not intrusive for other people who are present, unlike alternative systems that look at the environment in order to gauge privacy-sensitive situations.
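The two-camera arrangement the researchers describe amounts to a simple control loop: while the shutter is open, the scene camera decides when to close it; while it is closed, only eye-movement evidence can reopen it. A minimal sketch of that loop, with trivial placeholder functions standing in for the paper’s deep-learning models (all names here are illustrative, not from the paper):

```python
def scene_is_sensitive(frame):
    """Placeholder for the end-to-end model that classifies
    first-person camera frames as privacy-sensitive."""
    return frame.get("label") == "sensitive"

def eyes_suggest_safe(eye_features):
    """Placeholder for the eye-movement model that estimates,
    with no scene imagery, whether the privacy level has dropped."""
    return eye_features.get("predicted_privacy") == "non-sensitive"

def control_shutter(shutter_open, frame=None, eye_features=None):
    """One step of the control loop: close on sensitive scenes,
    reopen only on eye-movement evidence (no visual input)."""
    if shutter_open:
        # While recording, the scene camera itself decides closure.
        if frame is not None and scene_is_sensitive(frame):
            return False  # close the mechanical shutter
    else:
        # While closed, only the eye camera is available.
        if eye_features is not None and eyes_suggest_safe(eye_features):
            return True   # reopen the shutter
    return shutter_open
```

The asymmetry is the interesting design point: once the shutter is closed, the system deliberately has no scene imagery to work with, so reopening must be inferred from the wearer’s eyes alone.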

Markers of eye movements included fixations, saccades (quick, simultaneous movements of both eyes between two or more points of fixation in the same direction), blinks, pupil diameter and scan paths. The basic idea is to use the neural network to detect correlations between eye movement patterns and the level of privacy sensitivity experienced by the person wearing the camera headset. Although the work by the German team is only preliminary, they believe it has potential:

We are confident that the rapidly increasing capabilities of today’s deep neural networks will soon allow to push our proof-of-concept prototype towards an effective real-world application enabling privacy-preserving day-to-day usage of “always-on” smart glasses in real-time.
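The eye-movement markers listed above have to be turned into numbers before a neural network can use them. A minimal sketch of that feature-extraction step, with illustrative thresholds and units that are assumptions rather than values from the paper:

```python
import math

def gaze_features(samples):
    """Summarize a gaze recording into simple counts of the markers
    discussed above: fixation-like samples, saccades, blinks, and
    mean pupil diameter. Thresholds are illustrative assumptions."""
    SACCADE_DEG = 1.0  # assumed gaze jump (degrees) marking a saccade
    fixation_samples = saccades = blinks = 0
    pupil_sizes = []
    prev = None
    for s in samples:  # s = (x_deg, y_deg, pupil_mm), or None for a blink
        if s is None:
            blinks += 1
            prev = None
            continue
        x, y, pupil = s
        pupil_sizes.append(pupil)
        if prev is not None:
            if math.hypot(x - prev[0], y - prev[1]) > SACCADE_DEG:
                saccades += 1          # large jump between samples
            else:
                fixation_samples += 1  # gaze held roughly still
        prev = (x, y)
    mean_pupil = sum(pupil_sizes) / len(pupil_sizes) if pupil_sizes else 0.0
    return [fixation_samples, saccades, blinks, mean_pupil]
```

A vector like this, computed over a sliding window of gaze data, is the kind of input the privacy-level classifier would consume.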

Like all tools, neural networks can be used beneficially and harmfully. The same techniques deployed to analyze eye movements in order to safeguard privacy can also be applied with less benign outcomes, as another research project led by Professor Bulling indicated. The academic paper describing the work explains:

Besides allowing us to perceive our surroundings, eye movements are also a window into our mind and a rich source of information on who we are, how we feel, and what we do. Here we show that eye movements during an everyday task predict aspects of our personality. We tracked eye movements of 42 participants while they ran an errand on a university campus and subsequently assessed their personality traits using well-established questionnaires. Using a state-of-the-art machine learning method and a rich set of features encoding different eye movement characteristics, we were able to reliably predict four of the Big Five personality traits (neuroticism, extraversion, agreeableness, conscientiousness) as well as perceptual curiosity only from eye movements.

Once more, it’s important to note that this should be regarded as a preliminary investigation, albeit one with a promising outcome. Much more research is needed before it can be stated definitively that eye movements can predict people’s personalities. Perhaps the most important result from the work carried out by Bulling and his fellow researchers is not what they found, but how they found it.

In both cases, raw observational data – things like gaze fixations, saccades and eye blinks – were fed into a neural network system. The AI software then sought hidden patterns in that data: in the first case, correlating eye movements with privacy-sensitive situations, and in the second, correlating the eye movements with personality traits. It seems that interesting patterns were found for both – although these need to be confirmed by other researchers, and with larger training sets.
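The shared pipeline described above is easy to illustrate on toy data: feature vectors derived from raw observations go to a learner that finds a pattern linking them to a label. Here a tiny hand-rolled logistic regression stands in for the papers’ far larger models, on synthetic data invented purely for illustration:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Per-sample gradient-descent logistic regression;
    returns learned weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify a feature vector with the learned model."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Synthetic feature vectors [saccade_rate, mean_pupil_change]:
# the learner discovers that larger values correlate with label 1.
X = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.8]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
```

The point of the toy is the shape of the pipeline, not the model: whether the label is “privacy-sensitive situation” or “high neuroticism”, the mechanics of learning a correlation from behavioral features are the same.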

The application of AI techniques like neural networks is not limited to eye movements. It could easily be applied to any rich personal data set. Obvious possibilities include things like heartbeat, skin conductivity, voice pitch, hand movements, and walking patterns, not to mention non-physical ones like online posting patterns or digital writing styles. It may well be that some or all of these are susceptible to AI analysis that reveals patterns those generating the data are not aware of. Once more, that information might be used for all kinds of appropriate purposes – public safety, enhanced efficiency, personal counselling and so on – or it might be used for new kinds of invisible surveillance.

As Google Glass-like products become more common in the enterprise, perhaps even creeping back into public space, so the research described above will become particularly pertinent. However, exactly the same kinds of AI-based analysis can – and will – be applied to other products that gather, perhaps incidentally, personal information. Increasingly sophisticated real-time automated analysis will inevitably reveal aspects of our private lives in unsuspected, and often unwanted, detail.

Featured image by Google X.