From Surveillance Capitalism to “Influence Government”: Using Micro-Targeted Ads to “Nudge” People’s Everyday Behavior

Posted on Oct 8, 2021 by Glyn Moody

Privacy News Online has written a number of times about “surveillance capitalism” and its use of micro-targeted advertising to influence people’s buying decisions. But the worrying power of such highly targeted advertising is not restricted to the world of commerce. As the Cambridge Analytica saga showed, it is also deployed in the world of politics, to encourage people to vote for particular candidates and to support particular policies. Some fascinating work from the Scottish Centre for Crime and Justice Research (SCCJR) looks at how the UK government has drawn on micro-targeted advertising to modify the everyday behavior of certain groups of people – what the researchers call “influence government”:

This is not simply a case of algorithms being used for sorting, surveilling, and scoring; rather this suggests that targeted interventions in the cultural and behavioural life of communities are now a core part of governmental power which is being algorithmically-driven, in combination with influencer networks, traditional forms of messaging, and frontline operational practices.

As the report explains, this draws on an older idea known as “nudge theory”, which uses positive reinforcement and indirect suggestions to influence people’s behavior. Wikipedia cites one of the best-known examples of nudge theory: placing the image of a housefly on the men’s room urinals at Amsterdam’s Schiphol Airport, intended to “improve the aim.” Back in 2010, the UK government set up a formal “Nudge Unit”, which was given the task of using nudge theory to improve public services. In 2014 it was spun out as a limited company called the Behavioural Insights Team, which now has offices around the world.

One of its most recent reports, “The Behavioural Economy”, is a 10-point manifesto setting out how governments, regulators and central banks can use “behavioural levers and nudges” to deliver “real positive change”. The SCCJR report examines how the UK government has been applying nudge theory by combining it with micro-targeted advertising. One trivial example involved taking data about people who had bought candles and targeting them through their smart speakers with fire safety adverts. A more sophisticated use of micro-targeting was employed by the National Crime Agency (NCA), the UK’s equivalent of the FBI:

adverts, targeted at UK adolescents between the age of 14 and 20 with an interest in gaming, are calibrated to appear when users search for particular cybercrime services on Google, informing them that these services are illegal and that they face NCA action if they purchase them. Beginning as simple text-based adverts, the NCA developed them across a six month campaign in consultation with behavioural psychologists and using the data they were collecting from their operational work. They additionally linked these adverts to hashtags for major gaming conventions (assuming from their debriefing interviews and the academic literature a link between gaming and cybercrime), and purchased advertorials discussing the illegality of these services on major gaming websites. Finally, they developed video adverts … for circulation on YouTube.

Apparently the campaign was successful: there was no growth in purchases of Denial of Service attacks in the UK at a time when such attacks were rising sharply in comparable nations. However, the SCCJR report raises a number of concerns about this approach. For example, it asks who is allowed to draw on nudge techniques, who sets the priorities for their deployment, and how transparent and accountable these processes are.

The lack of transparency of these methods is of particular concern, and it is the same deep problem that afflicts micro-targeted advertising in general. Where traditional ads are visible to all, and can therefore be monitored and criticized if need be, micro-targeted ones are seen only by a few people in particular groups. Similarly, traditional nudge techniques – things like minimum unit pricing for alcohol, changes to cigarette displays, and anti-homeless spikes installed in doorways – may be aimed at particular populations, but their wider visibility allows at least a degree of accountability and critique, since they can provoke public outrage and be reported on by journalists. The latest digital nudge techniques, by contrast, are seen only by their intended audience, which makes them effectively invisible and lets them escape the usual accountability. On the other hand, the researchers note that the use of targeted digital nudges may be justified in certain circumstances to counter other forms of digital abuse:

There are a wealth of areas in which targeted advertising and influence approaches are being used in co-ordinated campaigns by malicious actors, from the spread of illicit cybercrime services, to the targeting of vulnerable people with scams, to attempts by far-right, misogynist, racist, and queerphobic groups to spread hateful narratives and radicalise. There is a compelling argument to be made that the state has some duty to either counter these malicious influence campaigns directly on the same terms, or to support communities in doing this work themselves.

However, an alternative and arguably better approach to fighting the use of micro-targeted ads by malicious actors would be to get rid of surveillance advertising altogether, by forbidding companies from gathering such detailed personal data in the first place. This would not only improve the general privacy of online users, but also prevent bad actors from abusing that data. It would also remove the temptation for governments to “nudge” their citizens in ways that may not be in those citizens’ best interests. That’s a real danger when such nudging can be carried out with minimal awareness on the part of the public and the media, as has already happened in the UK.

Featured image by Genusfotografen (Tommas Gunnarsson) / Wikimedia Sverige.