Apple’s Vision Pro Makes Developing Ways to Protect Privacy in the Metaverse Even More Urgent

Posted on Feb 21, 2024 by Glyn Moody

At the end of 2021, we warned that the new generation of virtual reality (VR) systems posed a huge and largely ignored threat to privacy. That post was prompted by the re-branding of Facebook as Meta – a move that ultimately proved something of a damp squib, as Zuckerberg’s company failed to convince many people to join its VR world. Two years later, Zuckerberg has jumped on the artificial intelligence bandwagon, although he insists that he is still investing in the VR sector. That may or may not be a sound strategic move, but it does suggest that the VR gamble hasn’t paid off in the way he hoped when he renamed his entire company to reflect a focus on the technology.

Moreover, with the launch of its Vision Pro, Apple has established itself as the leader in this sector – at least judging by the positive reviews of its new headset and associated technologies. However, a column by Geoffrey A. Fowler in the Washington Post takes a more critical view of the device, as its title – “Apple’s new Vision Pro is a privacy mess waiting to happen” – makes clear:

Adding to my concern is that Apple, which has staked its reputation on privacy, wouldn’t answer most of my questions about how the Vision Pro will tackle these problems. Nor has it, to date, allowed The Washington Post to independently test the hardware.

Fowler notes two serious issues. One is that in order to work, the Vision Pro needs to create a map of the real world around the user. But as Fowler writes:

On a basic level, the Vision Pro might know it’s in a room with four walls and a window and a 12-foot ceiling — so far, so good, [former policy lead on sensor data at Meta’s Reality Labs] Jerome says. But then add in that you’ve got a 75-inch television, suggesting you might have more money to spend than someone with a 42-inch set. Since the device can understand objects, it could also detect if you’ve got a crib or a wheelchair or even drug paraphernalia, he says.

Advertisers and data brokers who build profiles of consumers would salivate at the chance to get this data. Governments, too.

The granularity of the location and biometric data that the Vision Pro gathers is extremely fine, allowing inferences to be made about users’ activities and interests that were impossible before. This is particularly problematic given the new generation of AI systems that can bring together large amounts of data and extract plausible inferences from it. Fowler says that Apple didn’t answer his questions about how much it can monitor what Vision Pro apps do with the data the headset gathers, or how it intends to vet and control them.

On its site, Apple writes that “Data from cameras and sensors is processed at the system level, so individual apps do not need to see your surroundings to enable spatial experiences.” Note the careful wording: apps do not “need” to see your surroundings, which implies they may still be able to do so if users grant permission. That could lead to apps encouraging users to share such data in order to “improve” the VR experience, just as apps today encourage people to accept tracking cookies to “improve” the browsing experience.
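To make that permission gate concrete, here is a minimal Swift sketch based on my reading of Apple’s visionOS ARKit APIs (ARKitSession and SceneReconstructionProvider are Apple’s names; the exact flow shown is an illustrative assumption, not Apple sample code). The point is simply that once a user taps “Allow”, an app can start receiving reconstructed geometry of the room – exactly the kind of data discussed above:

```swift
import ARKit

// Illustrative sketch (assumed visionOS API usage, not Apple sample code):
// an app must obtain world-sensing authorization before it can receive
// the reconstructed mesh of the user's surroundings.
func requestRoomData() async {
    let session = ARKitSession()

    // Ask the user for permission to sense the surrounding world.
    let results = await session.requestAuthorization(for: [.worldSensing])
    guard results[.worldSensing] == .allowed else {
        print("Permission denied: the app never sees the room.")
        return
    }

    // With consent, the app receives mesh anchors describing the room's
    // geometry: walls, furniture, and other detected surfaces.
    let sceneReconstruction = SceneReconstructionProvider()
    do {
        try await session.run([sceneReconstruction])
        for await update in sceneReconstruction.anchorUpdates {
            print("Room mesh updated:", update.anchor.id)
        }
    } catch {
        print("ARKit session error:", error)
    }
}
```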

Apple is clearly aware that there are serious privacy issues here, because for one particularly sensitive kind of data it already promises stronger controls. It says that eye input – where the user is looking – “is not shared with Apple, third-party apps, or websites. Only your final selections are transmitted when you tap your fingers together.” That is good, but it also suggests that other kinds of personal data could be shared in some circumstances.

The other major issue raised by Fowler concerns not the environment in which the Vision Pro is used, but the user. This is something we discussed a year ago in a post that quoted work from a group of researchers at UC Berkeley and elsewhere. It showed how even the most basic data stream produced by interactions with a virtual world – simple motion data – is enough to identify a user from within a pool of 50,000 people with very high accuracy. Moreover, the same group showed that over 40 personal attributes could be accurately and consistently inferred from VR motion data alone.
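To see why motion data is so identifying, consider a deliberately simplified sketch (my own toy example, not the Berkeley researchers’ method): even a single crude feature, such as average headset height, acts like a fingerprint that links an “anonymous” session back to a known user.

```swift
import Foundation

// Toy illustration: fingerprint a user from head-height telemetry alone.
// Each "session" is a stream of headset y-positions in metres; the
// summary statistics act as a crude biometric.
struct MotionFingerprint {
    let meanHeight: Double   // proxy for standing height
    let heightRange: Double  // posture sway

    init(samples: [Double]) {
        let mean = samples.reduce(0, +) / Double(samples.count)
        meanHeight = mean
        heightRange = (samples.max() ?? mean) - (samples.min() ?? mean)
    }

    func distance(to other: MotionFingerprint) -> Double {
        let dh = meanHeight - other.meanHeight
        let dr = heightRange - other.heightRange
        return (dh * dh + dr * dr).squareRoot()
    }
}

// Enrolment: fingerprints captured from earlier, identified sessions.
let enrolled: [String: MotionFingerprint] = [
    "alice": MotionFingerprint(samples: [1.62, 1.63, 1.61, 1.64]),
    "bob":   MotionFingerprint(samples: [1.80, 1.82, 1.79, 1.81]),
]

// A new, nominally anonymous session is matched by nearest neighbour.
let anonymousSession = MotionFingerprint(samples: [1.63, 1.62, 1.64, 1.62])
let bestMatch = enrolled.min { a, b in
    anonymousSession.distance(to: a.value) < anonymousSession.distance(to: b.value)
}
print("Best match:", bestMatch?.key ?? "none")  // prints "alice"
```

The real research uses far richer features and tens of thousands of users, but the underlying mechanism – distinctive statistics leaking from routine motion – is the same.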

This means that VR data is potentially even more harmful to privacy than ordinary browsing data, and raises an important question: how can key personal data be protected when using virtual reality? The same UC Berkeley group has looked at that problem in another preprint on arXiv. The researchers have come up with a method that they compare to a real-time voice changer. The latter masks the perceived identity of a voice without altering important features such as intelligibility or inflection. Similarly, the UC Berkeley researchers suggest that VR anonymization might be achieved by changing the identifying characteristics of a user’s VR data stream without altering important elements such as the speed of movement or the key sequential control actions.
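A toy version of that idea might look like the following (again my own illustrative sketch under simplified assumptions, not the researchers’ algorithm): the user’s identifying average height is replaced with a population-wide value plus a little noise, while the relative motion – the “gesture” that the application actually needs – is preserved.

```swift
import Foundation

// Sketch of the "motion changer" idea: strip an identifying
// characteristic (absolute height) from a stream of head positions
// while preserving the shape and timing of the movement itself.
func anonymize(heights: [Double],
               populationMean: Double = 1.70,
               noiseScale: Double = 0.005) -> [Double] {
    let userMean = heights.reduce(0, +) / Double(heights.count)
    return heights.map { h in
        let centered = h - userMean           // remove the user's height
        let jitter = Double.random(in: -noiseScale...noiseScale)
        return populationMean + centered + jitter
    }
}

let raw = [1.62, 1.63, 1.61, 1.64]  // identifiable: a roughly 1.62 m user
let safe = anonymize(heights: raw)
print(safe)  // centred on 1.70 m; the relative motion survives
```

As with a voice changer, the anonymized stream remains usable for the task at hand, but the trait that links it to a specific person has been removed.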

In the wake of the excitement generated by Apple’s Vision Pro, research exploring how personal data flows from VR headsets, and how that data might be abused, becomes even more urgent. If effective techniques to minimize the threat to privacy are not developed soon, there is a risk that VR headsets will become not just the most dangerous surveillance device so far – surpassing even the smartphone – but one that people willingly strap to their heads for hours at a time.