Facebook doesn’t care about your privacy, it just wants to farm your personal data – these confidential documents prove it

Posted on Dec 15, 2018 by Glyn Moody

Privacy News Online has published a number of stories about privacy problems with Facebook. But what the company’s top management really thinks about the personal information it holds – as opposed to its public statements that “Protecting the privacy of the people on Facebook is of utmost importance to us” – has been something of a mystery. Until now. A senior member of the UK Parliament has released confidential Facebook documents from a court case involving Six4Three, makers of an app that allowed users to search their friends’ photos for bathing suit pictures. The emails and memos provide a rare glimpse of the honest opinions of Mark Zuckerberg and his senior managers.

Readers of this blog know how important VPNs are for protecting privacy – and how vital it is to be able to trust a VPN supplier. Facebook acquired the VPN company Onavo, and claims that “millions of users around the world use Onavo’s mobile apps to take the worry out of using smartphones and tablets.” A Facebook user might naively assume this shows the company is not only concerned about privacy, but is generously seeking to protect it by providing a free VPN service. So the revelation that Facebook has been using Onavo to spy on people is pretty shocking. As the UK lawmaker who released the Facebook documents put it:

Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
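It is worth pausing on how little is needed to do this. A VPN operator sits between its users and every site and service they connect to, so connection metadata alone – which backend hosts a device talks to, and how often – is enough to estimate which apps people use and how heavily. The sketch below is purely hypothetical (all names are invented, and it is in no way Onavo’s actual code); it only illustrates how raw VPN connection logs could be turned into the kind of app-usage survey described above:

```kotlin
// Hypothetical sketch: turning connection metadata visible to any VPN
// operator into per-app usage estimates. All names are invented for
// illustration; this is not Onavo's actual code.
data class Flow(val userId: Long, val destHost: String, val timestampMs: Long)

// Assumed mapping from characteristic backend domains to the apps that use them.
val backendToApp = mapOf(
    "whatsapp.net" to "WhatsApp",
    "snapchat.com" to "Snapchat"
)

// For each app: (distinct users seen, total connections observed).
fun usageByApp(flows: List<Flow>): Map<String, Pair<Int, Int>> =
    flows.mapNotNull { flow ->
        backendToApp.entries
            .firstOrNull { flow.destHost.endsWith(it.key) }
            ?.let { it.value to flow.userId }
    }
        .groupBy({ it.first }, { it.second })
        // distinct users = reach; total flows = rough intensity of use
        .mapValues { (_, users) -> users.toSet().size to users.size }
```

Note that nothing here inspects traffic contents: reach and frequency of use fall out of who connects to which backend, which is reportedly the kind of signal that informed Facebook’s acquisition decisions.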

Tracking people without their knowledge or consent is bad enough. Subverting a VPN to do it is an extraordinary betrayal of trust. Android users of Facebook’s app were even worse off: Facebook changed its software so that it could log their phone calls and text messages as well as their use of apps. The company went out of its way to hide from users what was going on:

Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard [as] possible for users to know that this was one of the underlying features of the upgrade of their app.
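For context, call and SMS metadata on Android is exposed through standard content providers: any app that holds the READ_CALL_LOG permission can read it in a few lines of code, and on older Android versions that permission could be granted at install time without any runtime prompt. The controversy was therefore about how the permission was obtained and disclosed, not about any exotic technique. Here is a minimal illustrative sketch – not Facebook’s code – of reading call records:

```kotlin
import android.content.Context
import android.provider.CallLog

// Minimal sketch: reading call metadata via Android's standard CallLog
// content provider. Requires the READ_CALL_LOG permission; on older
// Android versions this was granted at install time with no runtime
// dialog. Illustrative only, not Facebook's code.
fun readCallRecords(context: Context): List<String> {
    val records = mutableListOf<String>()
    context.contentResolver.query(
        CallLog.Calls.CONTENT_URI,
        arrayOf(CallLog.Calls.NUMBER, CallLog.Calls.DATE, CallLog.Calls.DURATION),
        null, null,
        "${CallLog.Calls.DATE} DESC"  // newest calls first
    )?.use { cursor ->
        val number = cursor.getColumnIndexOrThrow(CallLog.Calls.NUMBER)
        val date = cursor.getColumnIndexOrThrow(CallLog.Calls.DATE)
        val duration = cursor.getColumnIndexOrThrow(CallLog.Calls.DURATION)
        while (cursor.moveToNext()) {
            records += "${cursor.getString(number)} " +
                "at ${cursor.getLong(date)} for ${cursor.getLong(duration)}s"
        }
    }
    return records
}
```

An equivalent query against the SMS content provider yields text-message metadata in the same way, given the READ_SMS permission.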

Moreover, hiding the change was a calculated decision to undermine people’s privacy in the hope that no one would notice. One email warned: “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.” The documents reveal that Facebook fundamentally viewed users’ personal data as a resource to be exploited to the maximum for growth and for profit. As the UK lawmaker summarized:

It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends’ data to the financial value of the developers’ relationship with Facebook is a recurring feature of the documents.

Facebook was always hungry for more personal data that it could use to sell advertising. For this reason it wanted “full reciprocity” from companies developing apps that linked to Facebook accounts. Mark Zuckerberg wrote in one memo:

I think we should go with full reciprocity and access to app friends for no charge. Full reciprocity means that apps are required to give any user who connects to FB a prominent option to share all of their social content within that service back (ie all content that is visible to more than a few people, but excluding 1:1 or small group messages) back to Facebook.

In addition, it turns out that after the platform changes in 2014/15 Facebook allowed certain privileged companies to maintain full access to friends’ data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be “whitelisted” in this way. It is this cavalier attitude to people’s privacy that allowed the Cambridge Analytica disaster to happen.

Similarly, it is Facebook’s obsession with gathering ever more personal data, and using it to let advertisers target people with pinpoint precision, that allowed murky organizations linked to Russia to place divisive ads during the US presidential campaign. In the documents released by the UK Parliament, Mark Zuckerberg provided a perfect summary of why these privacy disasters kept happening:

We’re trying to enable people to share everything they want, and to do it on Facebook. Sometimes the best way to enable people to share something is to have a developer build a special purpose app or network for that type of content and to make that app social by having Facebook plug into it. However, that may be good for the world but it’s not good for us unless people also share back to Facebook and that content increases the value of our network. So ultimately, I think the purpose of platform – even the read side – is to increase sharing back into Facebook.

This goes to the heart of the problem with Facebook. What Zuckerberg calls “sharing” is really nothing less than constant surveillance: gathering as much information as possible about people’s lives, activities and interests. Not in order to use it against them, as government surveillance might, but in order to sell ads. In his response to the release of the confidential Facebook documents, Zuckerberg explains that this business model came about as a result of trying to manage the transition to a world where most people use Facebook on mobile devices rather than on desktop systems:

Ultimately, we decided on a model where we continued to provide the developer platform for free and developers could choose to buy ads if they wanted. This model has worked well. Other ideas we considered but decided against included charging developers for usage of our platform, similar to how developers pay to use Amazon AWS or Google Cloud. To be clear, that’s different from selling people’s data. We’ve never sold anyone’s data.

Although that may be true, strictly speaking, Facebook does sell access to its users based on that data. So even if the personal data itself is screened from advertisers, they can still reach exactly the people they want, as if they had direct access to it. This has led to the rise of micro-targeted ads, with all the problems that are only now beginning to be appreciated.
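To see why both statements can be true at once, consider the shape of the targeting model: the advertiser hands the platform criteria and money, the platform does the matching against its own profile store, and only aggregate results flow back. A deliberately simplified, hypothetical sketch (all types and names are invented):

```kotlin
// Hypothetical sketch of the targeting model: the platform never hands
// profile data to the advertiser; it sells the *matching*. All types
// and names are invented for illustration.
data class Profile(val userId: Long, val age: Int, val interests: Set<String>)

class AdPlatform(private val profiles: List<Profile>) {
    /** The advertiser supplies criteria only, and gets back reach, not identities. */
    fun runCampaign(adId: String, interest: String, minAge: Int): Int {
        val matched = profiles.filter { interest in it.interests && it.age >= minAge }
        matched.forEach { deliver(adId, it.userId) }  // shown in each user's feed
        return matched.size
    }
    private fun deliver(adId: String, userId: Long) { /* render the ad */ }
}
```

No profile record ever leaves the platform, so “we’ve never sold anyone’s data” holds literally – yet the more data the platform gathers, the finer the targeting criteria it can sell against, which is why the distinction does little for users’ privacy.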

Sadly, this indifference to users’ privacy is not the only case of Facebook’s management saying one thing and doing another. Some journalists working as fact checkers for Facebook recently ended their partnership because, they say, the company ignored their concerns and failed to use their expertise to combat misinformation. In other words, it was all PR – just like the company’s protestations about the “utmost importance” of protecting the privacy of its users.

Featured image by George Hodan.