Facebook’s Very Bad, No Good Week: What It Means for Privacy, and How to Make Things Better

Posted on Oct 20, 2021 by Glyn Moody

On 5 October, Mark Zuckerberg sent a note to Facebook employees, beginning: “Hey everyone: it’s been quite a week, and I wanted to share some thoughts with all of you.” That’s something of an understatement in the wake of not one but two devastating blows to the company, both with important implications for privacy. The first was an outage of unprecedented seriousness. All of Facebook’s apps – Facebook, Instagram, WhatsApp, Messenger and Oculus – began displaying error messages, and then Facebook disappeared entirely from the Internet. It wasn’t just the public apps: all of Facebook’s internal systems stopped working too – scheduling tools, internal communications, security systems and calendaring. Some people couldn’t even enter Facebook offices and conference rooms because their digital badges stopped working.
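During the outage, even Facebook’s domain names stopped resolving, which is what “disappearing from the Internet” looks like to an outside observer. Here is a minimal sketch, using nothing beyond the Python standard library, of how one could check that symptom:

```python
import socket

def resolves(domain: str) -> bool:
    """Return True if the domain currently resolves to at least one address."""
    try:
        socket.getaddrinfo(domain, 443)
        return True
    except socket.gaierror:
        # Resolution failure: as far as ordinary clients can tell,
        # the domain has vanished from the public Internet.
        return False

for domain in ("facebook.com", "instagram.com", "whatsapp.com"):
    print(domain, "resolves" if resolves(domain) else "does not resolve")
```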

However logical it might have seemed at the time, creating that internal monoculture was clearly ill-advised, since it meant that when Facebook’s main system was down, everything was down. A note by Facebook engineering explains exactly what happened. But for many of the company’s 3.5 billion users, particularly in countries outside the West, the effects were even more dramatic. As the New York Times put it:

Facebook, Instagram, WhatsApp and Messenger have long been more than just a way to chat and share photos. They are critical platforms for doing business, arranging medical care, conducting virtual classes, carrying out political campaigns, responding to emergencies and much, much more.

In parts of the developing world, the cost of the Facebook outage was particularly acute. In India, Latin America and Africa, its services are essentially the internet for many people — almost a public utility, usually cheaper than a phone call and depended upon for much of the communication and commerce of daily life.

The fact that, in many parts of the world, Facebook is essentially synonymous with the Internet gives the company immense monopolistic power. That on its own would be worrying enough. But as numerous posts on Privacy News Online have detailed, Facebook abuses that power to gather huge stores of personal information about its users, which it then uses to sell advertising. In other words, billions of people are largely dependent on a privacy-hostile service. And there’s another troubling aspect. In his note, Zuckerberg wrote:

The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don’t want their ads next to harmful or angry content. And I don’t know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.

That accusation has been made by many people before, but it gained new force following the testimony before a US Senate subcommittee of Frances Haugen, who worked on Facebook’s civic misinformation team for nearly two years, until May. Her claims about Facebook are backed up by thousands of documents she gathered during her time with the company. One of her main criticisms is that Facebook prioritizes posts based on how many likes, shares and comments they generate, and that this often means promoting false, divisive or provocative material. Such posts are foregrounded because the greater engagement they generate – whether positive or negative – is attractive to advertisers.
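Reduced to a caricature, the ranking mechanism Haugen describes looks something like the sketch below. The fields and weights are purely illustrative assumptions, not Facebook’s actual formula; the point is that the score rewards any reaction at all, whether the post is calm and accurate or false and enraging.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

# Illustrative weights only; not Facebook's actual formula.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0}

def engagement_score(post: Post) -> float:
    # Every reaction counts toward the score; the ranking is blind to
    # whether the engagement was delight, outrage or disbelief.
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["shares"] * post.shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most-engaged-with posts float to the top of the feed.
    return sorted(posts, key=engagement_score, reverse=True)
```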

In his note, Zuckerberg wrote that “advertisers consistently tell us they don’t want their ads next to harmful or angry content”. That may well be so. But he fails to mention that advertisers don’t choose where to advertise based on the content itself, but on the detailed personal data of the person viewing it, which allows micro-targeted placement of ads. That fact offers a way to reduce the polarizing influence of false or negative posts. If ads were placed by context, then – according to Zuckerberg himself – advertisers would shun negative posts, which would give Facebook an incentive to stop prioritizing them, as it currently does. If advertisers prefer positive, uplifting posts, the best way to push Facebook to promote them is to ban micro-targeting and move to context-based advertising.
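The difference between the two placement models is easy to see in a sketch. In the hypothetical code below, pick_ad_microtargeted ignores the post entirely and scores ads against the viewer’s interests, while pick_ad_contextual scores them against the content the ad will appear next to; all of the names and fields are invented for illustration.

```python
# Hypothetical ad record, e.g.:
# {"name": "hiking boots", "target_interests": {"hiking"}, "context_topics": {"outdoors"}}

def pick_ad_microtargeted(user_interests: set[str], ads: list[dict]) -> dict:
    # Micro-targeting: the choice depends only on the viewer's personal data,
    # so the tone of the surrounding post never enters the decision.
    return max(ads, key=lambda ad: len(ad["target_interests"] & user_interests))

def pick_ad_contextual(post_topics: set[str], ads: list[dict]) -> dict:
    # Contextual placement: the choice depends on the content the ad sits
    # next to, which gives advertisers (and therefore the platform) a reason
    # to steer away from harmful or angry posts.
    return max(ads, key=lambda ad: len(ad["context_topics"] & post_topics))
```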

Doing so would not only help to reduce Facebook’s harmful effects on people and society; it would also start to move businesses away from surveillance advertising in general. However, addressing Facebook’s monopolistic grip on the online world requires other solutions. Unsurprisingly, politicians and pundits have rushed to offer their ideas. A post on Mashable sums up the four main approaches.

One would be for governments around the world to rein it in directly, using legislation. But it takes years for laws to be framed, honed and passed. By then, the technology has moved on, making the laws a poorer fit, even assuming they were well designed to start with, which is by no means assured. Another would be to break up Facebook, for example by forcing it to sell WhatsApp. That doesn’t really address the world’s dependence on single apps. Frances Haugen herself favors a different approach: transparency. She wants researchers to have full access to the company’s research and internal studies. Although that is a good idea in itself, it’s hard to see how it really tackles Facebook’s global monopoly.

The final option seems the best: to force Facebook to allow rival, compatible services to interoperate with it. This is an idea that was suggested back in 2019 by two leading thinkers in this area. The writer and activist Cory Doctorow proposed what he called “adversarial interoperability”, while Mike Masnick, the founder of Techdirt (full disclosure: I write for the title), framed it as “protocols, not platforms”.
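One way to picture “protocols, not platforms” is a small common interface that any social service, including Facebook, could be required to implement, so that clients and rival networks talk to the protocol rather than to one company’s app. Everything in the sketch below (the SocialService interface and its method names) is hypothetical, not an existing standard.

```python
from typing import Protocol

class SocialService(Protocol):
    """Hypothetical common interface that any social network could implement."""

    def fetch_posts(self, user_handle: str) -> list[str]: ...
    def publish(self, user_handle: str, text: str) -> None: ...

def cross_post(text: str, user_handle: str, services: list[SocialService]) -> None:
    # A client written against the protocol works with every compliant
    # service, so switching networks does not mean losing your audience.
    for service in services:
        service.publish(user_handle, text)
```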

Once people can move in a friction-free way to other services, while remaining in touch with friends and family on Facebook and elsewhere, we will see better approaches emerge as a result of the Darwinian competition that is sorely missing today. In particular, people would be able to migrate to social networks that respect privacy, rather than exploit it, as Facebook does with such ruthless efficiency.

Featured image by Chris McKenna (Thryduulf).