Dark Patterns: How Tech Companies Use Interface Design to Undermine Online Privacy

Posted on Jul 14, 2018 by Glyn Moody

The EU’s General Data Protection Regulation (GDPR) came into force back in May. One reason many people know about the GDPR is that they were bombarded with emails asking them to accept updated privacy policies as a result. Another is that some companies have required people to agree to new terms and conditions when they logged on to a service for the first time after the GDPR came into force. Sometimes, they are given the option to accept various uses for their personal data: customisation, targeted ads, market research, etc. That might seem like a good thing, since it appears to implement the GDPR requirement that users must give explicit permission for the ways in which companies use their personal data.

However, even though users can in theory change their privacy settings to optimise protection for their personal data, they may not do so. In part, that’s because it requires effort, and people often simply accept the defaults. Moreover, further problems arise from the use of “dark patterns” in the very screens that are supposed to help users control their privacy settings. The term was coined back in 2011 by Harry Brignull, an expert in user interface design. Here’s his definition:

A dark pattern is a user interface carefully crafted to trick users into doing things they might not otherwise do, such as buying insurance with their purchase or signing up for recurring bills. Normally when you think of “bad design,” you think of the creator as being sloppy or lazy — but without ill intent. Dark patterns, on the other hand, are not mistakes. They’re carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind.

Brignull runs a site called Dark Patterns, which includes a “hall of shame” with real-life examples of dark patterns, and a list of common types. One of these is “Privacy Zuckering”, where “You are tricked into publicly sharing more information about yourself than you really intended to. Named after Facebook CEO Mark Zuckerberg.” A free report, “Deceived by Design”, funded by the Norwegian Consumer Council, reveals that top sites have recently been engaging in “Privacy Zuckering” to undermine the GDPR and its privacy protections. The report explores how Facebook, Google and Microsoft handled the process of updating their privacy settings to meet the GDPR’s more stringent requirements. Specifically, the researchers explored a “Review your data settings” pop-up from Facebook, “A privacy reminder” pop-up from Google, and a Windows 10 Settings page presented as part of a system update. Both Facebook and Google fare badly in terms of protecting privacy by default:

the Facebook GDPR popup requires users to go into “Manage data settings” to turn off ads based on data from third parties. If the user simply clicks “Accept and continue”, the setting is automatically turned on. This is not privacy by default.

Similarly, Google’s settings for ads personalisation and for sharing web & app activity require the user to actively go into the privacy dashboard in order to disable them. On the other hand, the settings to store Location History, Device Information, and Voice & Audio Activity are turned off until the user actively enables them.
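
To make that defaulting asymmetry concrete, here is a minimal sketch, in TypeScript, of the two strategies the report contrasts. The type and setting names are purely illustrative assumptions, not code or APIs from either company.

```typescript
// Illustrative sketch of "accept-and-continue" defaults versus privacy by
// default. All names below are hypothetical, not real company APIs.
interface ConsentSettings {
  adsFromThirdPartyData: boolean; // e.g. Facebook's "Ads based on data from partners"
  adsPersonalisation: boolean;    // e.g. Google's ads personalisation
  webAndAppActivity: boolean;     // e.g. Google's web & app activity sharing
}

// Dark-pattern defaulting: the one-click "Accept and continue" path leaves
// every data use switched on unless the user digs into the submenus.
const acceptAndContinueDefaults: ConsentSettings = {
  adsFromThirdPartyData: true,
  adsPersonalisation: true,
  webAndAppActivity: true,
};

// Privacy by default, as the GDPR intends: everything starts off and is
// only enabled by an explicit, affirmative action from the user.
const privacyByDefault: ConsentSettings = {
  adsFromThirdPartyData: false,
  adsPersonalisation: false,
  webAndAppActivity: false,
};

// A user who never opens "Manage data settings" simply inherits whichever
// of these the designer chose.
console.log(acceptAndContinueDefaults, privacyByDefault);
```

The point of the sketch is that the protective behaviour costs nothing to implement; the difference is entirely a design decision about which values the defaults take.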

The report underlines how the seemingly-innocuous choice of colors is, in fact, a dark pattern that tries to nudge users in a particular direction:

In Facebook’s GDPR popup, the interface was designed with a bright blue button enticing users to “Agree and continue”. Taking the easy path by clicking this button took the user to a new screen about face recognition, with a similarly prominent button to accept and continue.

On the other hand, users who wanted to limit the data Facebook collects, and how it is used, had first to click a grey box labelled “Manage data settings”, where they were led through a long series of clicks in order to turn off “Ads based on data from partners” and the use of face recognition technologies. This path was, in other words, considerably longer.

Once again, Google was just as bad. It offered a blue button to accept and continue using the service, whereas finding alternative options required clicking through multiple submenus, and even leaving the pop-up window and moving into Google’s privacy dashboard. Although Microsoft adopted a better approach in general, it made some subtle attempts to nudge users in directions that reduced privacy protection. For example, in the section controlling whether to allow applications to use an Advertising ID – a wide-ranging identifier for tracking – the icon for the “Yes” choice was an arrow hitting its target, while the “No” choice had an empty target. Moreover, the opt-in choice was placed at the top.
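
One way to think about these nudges is as a deliberate imbalance in friction and visual prominence between the two paths. The sketch below models that imbalance; the screen counts, labels and type names are illustrative assumptions, not measurements taken from the report.

```typescript
// Hypothetical model of the choice architecture described above: the
// consent path is one bright, prominent click, while the privacy path is
// visually muted and several screens longer.
interface ChoicePath {
  label: string;
  style: "primary" | "muted"; // bright blue button vs. grey box
  screens: number;            // screens the user must pass through
}

const agreePath: ChoicePath = {
  label: "Agree and continue",
  style: "primary",
  screens: 1,
};

const limitDataPath: ChoicePath = {
  label: "Manage data settings",
  style: "muted",
  screens: 4, // illustrative: submenus, confirmations, a separate dashboard
};

// The "cost" of protecting your privacy is the extra work attached to it.
const extraFriction = limitDataPath.screens - agreePath.screens;
console.log(`${extraFriction} extra screens to reach the privacy-protective choice`);
```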

Another important way that companies seek to influence our online privacy choices is through framing. Facebook presented the option of turning on facial recognition – a highly problematic practice as far as privacy is concerned – as a way to “help protect you from strangers using your photo”. Similarly, Google warned users that if they chose to turn off Ads Personalization, “You’ll still see ads, but they’ll be less useful to you”. The warning fails to mention any benefit of taking this route, implying there are none. Another powerful kind of dark pattern used by all three companies involves what the new report terms “forced actions and timing”:

When the user is trying to access their account, a popup that leaves no choice but to agree or leave is a powerful nudge toward the quick and easy solution. For example, if the popup appears just when the user is trying to read a message, or access information on an event they are about to attend, the immediacy of the task can reduce the likelihood that users take the time to read through what they are actually agreeing to.

The technique is particularly effective on mobile devices. With smaller screens, it is harder to scroll through and select from a variety of options, or to read long user agreements. In these circumstances, many people will simply accept the defaults. Anxiety-inducing dark patterns that make it seem users risk losing data, or even access to their account, if they don’t quickly agree to the terms and conditions effectively undermine a core requirement of the GDPR, namely that:

‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her;

The new “Deceived by Design” report reveals in detail the gulf between what the GDPR is trying to achieve in terms of giving people meaningful control over their personal data, and the reality of what major online companies offer. Facebook and Google may pay lip service to the GDPR requirement that users must give “specific, informed and unambiguous” consent to operations affecting their privacy. But a close inspection of the design and logic of their GDPR pop-ups and Web pages suggests that this compliance is little more than window-dressing.

That approach may turn out to have serious consequences for the companies. As Privacy News Online reported last month, the data protection expert Max Schrems has already filed four complaints against Google, Facebook, WhatsApp and Instagram over “forced consent” – the fact that users don’t have a real choice about whether to agree to the new terms and conditions. Any company using dark patterns of the kind described by the new report may well face similar formal complaints over their approach to implementing the GDPR’s new rights. It might be relatively easy to nudge users into giving up their privacy rights, but it won’t be so simple to convince EU courts not to impose huge fines for doing so.

Featured image by Alex Borland.