Why are people so poor at making privacy choices? What can be done about it?

Posted on Nov 17, 2018 by Glyn Moody

Privacy News Online explores a rich mix of information about privacy: threats to privacy, privacy wins, ways to enhance privacy. One common thread is how bad people are at protecting their privacy. So why is that? It’s not a topic explored much, which makes a recent feature in the Harvard Business Review particularly valuable. It goes beyond pointing out that people make poor choices when it comes to protecting their personal data, and attempts to understand the key reasons for that failure. Here are the main categories discerned by the article’s author, Leslie K. John, an associate professor of business administration at Harvard Business School.

Impatience

People are surprisingly willing to reveal highly personal information for trivial rewards. It is why people will hand over a great deal of sensitive health information in exchange for knowing whether their “biological age” is older or younger than their calendar age. That’s an almost meaningless reward, but it’s enough to hook people into giving away key information for free. Impatience also explains the general reluctance to adopt privacy controls when they are available, however simple, since they are perceived as barriers to quick online gratification. Moreover, those rewards tend to be instant, while the possibly serious consequences of handing over personal data are delayed, giving the impression that the trade is a risk-free gain.

The endowment effect

Strangely, people value privacy less when they have to buy it than when they sell it. John recounts the results of an experiment she conducted with colleagues that allowed consumers to buy or sell additional privacy protection. She found that almost 50% of people were willing to give up privacy for $2, but fewer than 10% were willing to pay $2 to obtain more privacy. This helps to explain why people are outraged when their privacy is compromised by a security breach at a company, yet show little interest when privacy protection is enhanced.

Illusion of control

Even the illusion of control can be enough to quell our privacy concerns. In a study, John and fellow researchers found that people are willing to accept third-party tracking, something they normally regard as invasive, if they are given a sense of control, however small. For example, people can be put at ease by something as irrelevant as a reminder that they can choose their profile pictures. Because they control their profile image, they feel as if they have broader control over their data. Companies exploit this effect. John found that when she opted out of dozens of targeted ads, the small print revealed that her action had only prevented specific companies from delivering targeted advertisements; it didn’t necessarily stop them from tracking her. Again, the process of opting out creates an illusion of control that masks the more serious privacy abuse, which continues regardless.

Desire for disclosure

One core reason why we are so bad at keeping personal information private is that humans are social creatures that seem wired for sharing thoughts, feelings and information. John points to one study where even people who were very concerned about their privacy went on to divulge personal details to a chat bot, presumably because the human-like interface triggered an innate desire to communicate. Again, companies take advantage of this primal urge. Social media sites are constantly urging users to share something, and even e-commerce services try to turn financial transactions into social exchanges.

False sense of boundaries

John points out that the online world does not work like the real world, and that leads us to make bad decisions:

in the online world, the rules are different. We often don’t get the same rich, visceral feedback that tempers our behavior in the off-line world. We may have the illusion, for instance, that we’re disclosing information only to a select group of people, such as the friends in our social media feed. People get into trouble when they post rants (say, about their employer) meant for a small subset of their friends, forgetting the broader audience that can see those disclosures (say, their boss and colleagues).

People are similarly misled by a belief that because messages on services like Snapchat and Instagram can disappear, it is safe to make more reckless disclosures. Without the immediate, moderating feedback we receive in the real world, it is easy to lose a sense of context and proportion, and to plunge into ever-more revealing comments that may one day come back to haunt us. This is notably the case for public figures, who are increasingly confronted with things they wrote or said years ago. That may be less of an issue for the general public, but the risk exists, nonetheless.

John’s article rightly notes an underlying issue that drives many of the bad decisions we make online: today’s Internet ecosystem is incredibly complicated. Not only that, it’s still evolving at breakneck speed, constantly outrunning our ability to grasp all the implications of our choices and actions in the digital world:

Do you know how cookies work? Do you understand how information on your browsing history, search requests, Facebook likes, and so on are monetized and exchanged among brokers to target advertising to you? Do you know what’s recorded and tracked when you ask your digital assistant to do something? The answer is probably no. That’s a problem.
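To make the quoted point a little more concrete, here is a minimal, purely illustrative Python sketch of how a third-party tracking cookie works. It is not taken from John’s article; the handler name, port and “uid” cookie are invented for the example. The idea is that an ad server embedded on many different sites plants one identifier in the browser, then sees that same identifier again on every page that embeds its content:

from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

class TrackerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # If the browser already carries our cookie, recognize the visitor;
        # otherwise mint a fresh identifier for them.
        cookie = self.headers.get("Cookie", "")
        if "uid=" in cookie:
            uid = cookie.split("uid=", 1)[1].split(";", 1)[0]
        else:
            uid = uuid.uuid4().hex
        # The Referer header reveals which page embedded us, so one cookie
        # quietly builds a browsing history across every participating site.
        print(f"visitor {uid} seen on {self.headers.get('Referer', 'unknown page')}")
        self.send_response(200)
        # Scoped to the tracker's domain, the cookie travels with every
        # request to that domain, regardless of which site embeds it.
        self.send_header("Set-Cookie", f"uid={uid}; Max-Age=31536000")
        self.end_headers()
        # A real tracker would return a 1x1 "tracking pixel" image here.

if __name__ == "__main__":
    HTTPServer(("", 8080), TrackerHandler).serve_forever()

Note that opting out of targeted ads typically changes only what the server does with that identifier, not whether the exchange above keeps happening, which is exactly the illusion of control described earlier.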

After delineating the many reasons why we make bad decisions about privacy, John, to her credit, goes on to explore possible ways of minimizing the harm that can flow from them. For all its obvious drawbacks, government regulation may be a good way forward, provided it is applied with intelligence:

the real promise of government intervention may lie in giving firms an incentive to use consumers’ personal data only in reasonable ways. One way to do that is to adopt a tool used in the product safety regime: strict liability, or making firms responsible for negative consequences arising from their use of consumer data, even in the absence of negligence or ill intent.

It’s clear that there are no simple answers here; if there were, we would have found them by now. But a good first step to helping people make better decisions about their privacy is to understand why they so often make bad ones. John’s article is a useful contribution to that effort, and is well worth reading by anyone who cares about privacy and how to protect it.

Featured image by Ramdlon.

About Glyn Moody

Glyn Moody is a freelance journalist who writes and speaks about privacy, surveillance, digital rights, open source, copyright, patents and general policy issues involving digital technology. He started covering the business use of the Internet in 1994, and wrote the first mainstream feature about Linux, which appeared in Wired in August 1997. His book, "Rebel Code," is the first and only detailed history of the rise of open source, while his subsequent work, "The Digital Code of Life," explores bioinformatics - the intersection of computing with genomics.
