We Need Privacy Laws for the Metaverse and We Need Them Now
Back in December 2021, we discussed on the PIA blog the potential privacy risks of the metaverse, the current term for what used to be called virtual reality. It might not seem like that long ago, but technology, particularly the kind that can identify you out of an immense pool of unknown users, has come a long way since then.
Researchers found that users of mixed-reality headsets can be identified in the metaverse with over 94% accuracy just by the motion of their hands. And this is just the beginning.
If you want to explore the metaverse without being identified against your will, the same researchers came up with a plugin that scrambles your user attributes. This is an effective, albeit roundabout, solution to the lack of privacy in the current iteration of the metaverse.
Metaverse Innovation Precedes Privacy
At that time, the problem was largely theoretical, but since then researchers have been fleshing out the bare bones of that early commentary. For example, in July 2022, three researchers working at UC Berkeley’s Center for Responsible Decentralized Intelligence investigated what they later called the “Unprecedented Privacy Risks of the Metaverse.” The basic format was for subjects to complete a series of tasks within a “Virtual Escape Room.” As the participants carried out those tasks, the researchers analyzed their actions and reactions to determine a number of key personal features. One person who participated in the research, the VR pioneer Louis Rosenberg, wrote an article for VentureBeat summarizing the most important finding:
the researchers were able to use my interactions in the escape room to predict my height, the length of my arms (wingspan), my handedness, my age, my gender, and basic parameters about my physical fitness level, including how low I could crouch down and how quickly I could react to stimuli. They were also able to determine my visual acuity, whether I was colorblind, and the size of the room that I was interacting with, and to make basic assessments of my cognitive acuity. The researchers could have even predicted whether I had certain disabilities.
The experiment also allowed the researchers to establish a participant’s location. Rosenberg notes that even using a VPN would not have preserved that private information: metaverse applications typically ping multiple servers, allowing a kind of triangulation to be used to establish the physical location of the user.
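To see why a VPN alone doesn’t help, it’s worth sketching how this kind of triangulation works. The following is a minimal, hypothetical illustration (the server coordinates, distances, and helper names are all invented for this example): if a metaverse application pings three servers whose locations are known, each round-trip time bounds the user’s distance to that server, and intersecting those distance circles pins down a position.

```python
import math

C = 299_792_458  # speed of light in m/s: an upper bound on signal speed

def rtt_to_distance(rtt_s: float) -> float:
    """Convert a round-trip time to an (optimistic) one-way distance."""
    return rtt_s / 2 * C

def trilaterate(servers, distances):
    """Estimate a 2D position from three known server positions and the
    user's distance to each (e.g. derived from round-trip times)."""
    (x1, y1), (x2, y2), (x3, y3) = servers
    d1, d2, d3 = distances
    # Subtracting the circle equations pairwise yields two linear
    # equations in (x, y), solvable by Cramer's rule.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical server coordinates (in km) and a user at (3, 4)
servers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
user = (3.0, 4.0)
dists = [math.dist(user, s) for s in servers]
print(trilaterate(servers, dists))  # recovers approximately (3.0, 4.0)
```

A VPN hides your IP address, but it cannot hide the relative timing of packets to multiple servers, which is what this technique exploits.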
In another article, Rosenberg explained how personal data could translate into new kinds of predatory practices, including virtual product placements and virtual spokespeople.
Your Data Is Up for Grabs in the Metaverse
But more recent work by researchers at UC Berkeley and elsewhere reveals that the privacy problems of the metaverse run even deeper than the above suggests. It turns out that the most basic data stream produced by interactions with a virtual world – simple motion data – is enough to identify a user with very high accuracy.
Just three data points are needed: the motion of the user’s head and of each hand. The researchers found that a user can be uniquely identified among a pool of over 50,000 people with 94% accuracy from 100 seconds of motion data. With 10 seconds of data, their system still achieved an accuracy of 73%. Rosenberg comments:
Even more surprising was that half of all users could be uniquely identified with only 2 seconds of motion data. Achieving this level of accuracy required innovative AI techniques, but again, the data used was extremely sparse — just three spatial points for each user tracked over time.
In other words, any time a user puts on a mixed reality headset, grabs the two standard hand controllers and begins interacting in a virtual or augmented world, they are leaving behind a trail of digital fingerprints that can uniquely identify them.
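The real system used sophisticated AI over raw telemetry, but the core idea can be sketched with a toy example: summarize each user’s head and hand positions into a few stable statistics (which correlate with height and wingspan), then match a new session against enrolled fingerprints by nearest neighbor. Everything below, including the user names and telemetry format, is invented for illustration and is far simpler than the researchers’ actual method.

```python
import math
import random

random.seed(0)

def motion_features(samples):
    """Reduce a stream of (head_y, left_hand_x, right_hand_x) telemetry
    to a simple fingerprint: mean head height and mean hand separation."""
    head = sum(s[0] for s in samples) / len(samples)
    span = sum(abs(s[2] - s[1]) for s in samples) / len(samples)
    return (head, span)

def identify(fingerprints, observed):
    """Return the enrolled user whose fingerprint is closest
    (Euclidean distance) to the newly observed one."""
    return min(fingerprints, key=lambda u: math.dist(fingerprints[u], observed))

def stream(height, span, n=100):
    """Simulate noisy headset telemetry for a user of given build."""
    return [(height + random.gauss(0, 0.01),
             -span / 2 + random.gauss(0, 0.01),
             span / 2 + random.gauss(0, 0.01)) for _ in range(n)]

# Hypothetical enrolled users: (height m, wingspan m)
profiles = {"alice": (1.60, 0.80), "bob": (1.85, 0.95), "carol": (1.72, 0.88)}
enrolled = {u: motion_features(stream(h, s)) for u, (h, s) in profiles.items()}

# A fresh, "anonymous" session is still re-identified from motion alone
new_session = motion_features(stream(*profiles["bob"]))
print(identify(enrolled, new_session))  # prints "bob"
```

The point of the sketch is that no account name or IP address is needed: the body itself leaks through the data stream.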
These results naturally raise the question: can anything be done to preserve privacy in the metaverse, or are we about to enter a digital world where we can always be tracked? Rosenberg has no doubts:
Metaverse Privacy Is Possible, But Unlikely Without Laws
It’s critical that we regulate this domain, requiring third parties to overtly inform us whenever we’re interacting with agenda-driven agents controlled by intelligent algorithms. This is especially important if those algorithms are also monitoring our reactions, for example assessing our posture, our breathing, and even our blood pressure, enabling conversational agents to skillfully adjust their messaging strategy in real time. This extreme level of interactive manipulation will happen unless it’s formally restricted.
Drawing up regulations and getting them passed is a slow and difficult process – look how long it is taking to get even basic privacy protections in the US, and how fiercely companies lobbied against the EU’s GDPR.
In fact, something called the XRSI Privacy and Safety Framework was released back in 2020. It’s based on a wide range of related standards, guidelines, and best practices, incorporating privacy requirements drawn from the GDPR, National Institute of Standards and Technology guidance, the Family Educational Rights and Privacy Act, and the Children’s Online Privacy Protection Rule. It’s an interesting document, but it’s not clear that it has had any impact on protecting privacy in the metaverse since it was published.
How To Protect Your Privacy in the Metaverse
While we are waiting for legislatures to catch up with technology, there may be an alternative approach. Inspired by the popularity of “incognito mode” in web browsers, the UC Berkeley team has developed an open-source, client-side plug-in that seeks to achieve a similar result for metaverse use.
The basic idea is simple: to add small amounts of digital noise to the sensitive user attributes that can be extracted from metaverse data streams, as described above. Effectively, the data streams “lie” about the person using the system, making them seem taller, slower, older, etc. than they really are.
The degree of noise can be adjusted, so that situations that require higher degrees of data accuracy – for example gaming – can still function with at least minimal privacy protection. For particularly sensitive applications, high levels of data obfuscation can be applied. Using this approach, it is even possible to blunt the impact of the server triangulation problem by artificially delaying data packets to hide the real round-trip times.
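The two defenses described above can be sketched in a few lines. This is a hypothetical illustration of the general idea, not the plug-in’s actual code; the function names, noise model (simple Gaussian jitter), and parameter values are all assumptions made for the example.

```python
import random

random.seed(42)

def obfuscate(value: float, noise_scale: float) -> float:
    """Add zero-mean Gaussian noise to a sensitive attribute before it
    leaves the client. Larger noise_scale means more privacy, less utility."""
    return value + random.gauss(0, noise_scale)

def pad_latency(real_rtt_ms: float, target_ms: float) -> float:
    """Artificially delay replies toward a fixed target round-trip time,
    so packet timing no longer reveals the true distance to each server."""
    return max(real_rtt_ms, target_ms)

# Hypothetical attributes inferred from headset telemetry
true_height_m = 1.75
true_wingspan_m = 0.92

# Low noise for a precision game session, high noise for a sensitive one
for scale, label in [(0.01, "gaming"), (0.10, "sensitive")]:
    h = obfuscate(true_height_m, scale)
    w = obfuscate(true_wingspan_m, scale)
    print(f"{label}: height={h:.2f} m, wingspan={w:.2f} m")

# All servers now see the same ~80 ms round trip, defeating triangulation
print([pad_latency(rtt, 80.0) for rtt in (22.0, 47.0, 80.0)])
```

Note the trade-off baked into `pad_latency`: every connection is slowed to the worst case, which is exactly the kind of usability cost that makes client-side defenses a hard sell.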
It’s a clever approach that helps you achieve more privacy in the metaverse, but with the obvious downside of requiring a plug-in to be installed on the client system – not something that all ordinary users may be willing or able to do. The fact that something as sophisticated as personal data obfuscation is needed is an indication of the challenges that lie ahead if we wish to preserve our privacy in virtual worlds.
Featured image created with Stable Diffusion.