Privacy is paramount to us, in everything we do. So today, we are announcing a new initiative to develop a set of open standards to fundamentally enhance privacy on the web. We’re calling this a Privacy Sandbox.
As a slogan, that sounds great. But as with Facebook, the reality is not so rosy. Google rightly points out one of the central problems with the modern Web: the detailed tracking of what people do online. The “justification” for this surveillance is that it allows ads to be targeted. As this blog has noted, micro-targeting requires huge quantities of personal information to be gathered. This has understandably led many people – around a third of all users, according to some estimates – to block cookies in order to prevent sites and advertisers from building persistent records of their online activity, and thus of their lives.
Google says “large scale blocking of cookies undermine[s] people’s privacy by encouraging opaque techniques such as fingerprinting.” Fingerprinting – creating a unique profile of your computer from your software, add-ons, screen size and fonts – is certainly an issue for privacy. But it’s one that can be addressed, as Mozilla is already doing with its Firefox browser. Google offers another reason why apparently people shouldn’t block cookies:
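To see why fingerprinting works even without cookies, consider that a browser exposes many attributes which are individually common but jointly near-unique. A minimal sketch in Python – the attribute values and the `fingerprint` helper here are hypothetical illustrations, not any real tracker’s code:

```python
import hashlib

# Hypothetical browser attributes of the kind fingerprinters combine;
# none identifies you alone, but together they can single out a device.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080x24",
    "timezone": "UTC+2",
    "fonts": "Arial,Helvetica,Noto Sans,DejaVu Serif",
    "plugins": "PDF Viewer,Widevine",
}

def fingerprint(attrs: dict) -> str:
    """Hash the sorted attributes into a stable identifier."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

The same browser yields the same hash on every visit – no cookie needed – which is why Mozilla’s countermeasures focus on making these attributes less distinctive.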
blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web. Many publishers have been able to continue to invest in freely accessible content because they can be confident that their advertising will fund their costs. If this funding is cut, we are concerned that we will see much less accessible content for everyone. Recent studies have shown that when advertising is made less relevant by removing cookies, funding for publishers falls by 52% on average
It’s worth exploring this argument, because it is often used to justify blanket surveillance of everyone’s Web activity. The 52% figure crops up in another Google blog post: “Traffic for which there was no cookie present yielded an average of 52 percent less revenue for the publisher than traffic for which there was a cookie present.” But what does that mean? Google gives no details, but presumably it means that if a referral did not come with a privacy-endangering cookie, then advertisers would pay less. So it is not that people who blocked cookies were actively causing publishers to lose money, but that advertisers were unreasonable in the personal information they demanded.
It may be that the fault lies with publishers refusing to stand up to advertisers. Privacy News Online reported earlier this year that the New York Times stopped using behavioral targeting on pages designed for the European market, because of concerns that it might be illegal under the GDPR – court cases currently underway in the EU are set to test this. Even though it no longer provided cookie-based information to advertisers, relying instead on contextual and geographical targeting, the newspaper’s ad revenue did not drop. It’s only a single data point, but a highly suggestive one.
As that post emphasized, the real problem is micro-targeting. This makes Google’s “solution” to abuses of this approach largely irrelevant. The Internet giant wants to create what it calls a “Privacy Sandbox”:
a secure environment for personalization that also protects user privacy. Some ideas include new approaches to ensure that ads continue to be relevant for users, but user data shared with websites and advertisers would be minimized by anonymously aggregating user information, and keeping much more user information on-device only. Our goal is to create a set of standards that is more consistent with users’ expectations of privacy.
There are undoubtedly some interesting ideas here. For example, using differential privacy to deliver ads to large groups of similar people without individually identifying data ever leaving the browser. Also worth noting is federated learning, which allows collaborative machine learning without centralized training data. But both of these approaches, however clever, miss the central point. People don’t want to be tracked less; they want no tracking at all. And it’s not just because they are trying to preserve their privacy: they also want to avoid the risk of being manipulated by such ads.
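The differential-privacy idea can be sketched in a few lines of Python. Everything here – the `dp_count` helper, the cohort size, the epsilon value – is an illustrative assumption, not part of Google’s actual proposal; the point is only that a noisy aggregate is released instead of any individual’s data:

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise of scale 1/epsilon.

    A counting query has sensitivity 1 (adding or removing one user
    changes it by at most 1), so Laplace(1/epsilon) noise gives
    epsilon-differential privacy for the released value.
    """
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Advertisers would see only the noisy size of an interest cohort,
# never which individual users belong to it.
noisy = dp_count(10_000, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy – but, as the text notes, even a perfectly noised aggregate still presumes that interest data is being collected in the first place.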
An excellent post on the Freedom to Tinker blog about Google’s plans notes that the company simply cannot make privacy “paramount” without destroying its core business model, which is based on tracking what people do, and selling that information to advertisers. If it were to make privacy paramount, Google would probably go out of business. However, it knows that people are realizing the central importance of privacy in the online world. It has to be seen to be responding to this requirement. What it is proposing is an awkward compromise that tries to shield its users to a certain extent, while keeping its advertisers happy too. But like most compromises, it will fail to satisfy either group.
Google’s Privacy Sandbox is just a proposal, which the company expects will take “multiple years” to implement, assuming it ever comes to fruition. During that time, people’s privacy will still be at risk. That’s unacceptable. People need proper data protection now, not a compromised form at some vague point in the future. The only solution is for micro-targeted advertising to be discarded in favor of contextual advertising. It worked for other media like TV, radio, and print in the past, still works for them today, and will work perfectly well for the online world. Google’s Privacy Sandbox “solution” is more like Privacy Sand in the Gears: it just impedes real progress, and should be ignored.
Featured image by Herzi Pinki.