This is where workplace surveillance leads: towards algorithmic, automated management

Posted on Mar 5, 2020 by Glyn Moody

A couple of years ago, Privacy News Online wrote about a new kind of surveillance taking place in the workplace. The aim of those systems was to keep an eye on workers, and they were often designed to spot problems. But two years is a long time in today’s digital world, and things have moved on considerably. For example, in 2017 artificial intelligence (AI) was already being applied to workplace monitoring, but largely to help analyse working patterns and flag up anomalies. Today’s AI is more capable, and much more interventionist. It is no longer content to sit back, metaphorically speaking, and merely watch workers go about their business; now it is starting to control them actively. A report from Data & Society describes this as “algorithmic management”:

[Its] emergence in the workplace is marked by a departure from earlier management structures that more strongly rely on human supervisors to direct workers. Algorithmic management enables the scaling of operations by, for instance, coordinating the activities of large, disaggregated workforces or using data to optimize for desired outcomes like lower labor costs.

The report picks out five key elements of algorithmic management: “prolific data collection and surveillance of workers through technology”; real-time responsiveness; automated or semi-automated decision making; performance evaluations made by AI systems based on relatively simple metrics; and the use of “nudges” and penalties to influence the behavior of workers.
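To see how those five elements fit together, here is a minimal sketch in Python of the kind of decision loop such a system might run. Every name, metric and threshold below is hypothetical, invented purely for illustration rather than taken from the report:

```python
# Minimal sketch of an algorithmic-management loop; all names, metrics
# and thresholds are hypothetical, not from the Data & Society report.
# It illustrates the five elements: pervasive data collection, real-time
# responsiveness, automated decisions, simple-metric evaluation, and
# nudges/penalties.

from dataclasses import dataclass


@dataclass
class WorkerSnapshot:
    worker_id: str
    keystrokes_per_min: float  # element 1: data gathered by surveillance
    tasks_completed: int


def productivity_score(s: WorkerSnapshot) -> float:
    # Element 4: performance reduced to a crude, easily computed metric.
    return 0.5 * s.keystrokes_per_min + 5.0 * s.tasks_completed


def manage(s: WorkerSnapshot, threshold: float = 50.0) -> str:
    # Elements 2 and 3: the system reacts in real time and decides
    # automatically, with no human supervisor in the loop.
    score = productivity_score(s)
    if score >= threshold:
        return f"{s.worker_id}: no action (score {score:.0f})"
    if score >= 0.8 * threshold:
        # Element 5: a "nudge" to steer behavior toward the target...
        return f"{s.worker_id}: nudge sent ('close to target, speed up')"
    # ...or a penalty, applied without explanation or appeal.
    return f"{s.worker_id}: penalty flagged (reduced pay)"


if __name__ == "__main__":
    for snap in (WorkerSnapshot("w1", 80.0, 4),
                 WorkerSnapshot("w2", 60.0, 2),
                 WorkerSnapshot("w3", 20.0, 1)):
        print(manage(snap))
```

Even this toy version makes the pattern plain: the “manager” is nothing more than a crude score and a pair of thresholds, applied instantly and without human judgment.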

The Data & Society report points out that many of these features first appeared in companies operating as part of the “gig” economy – Uber, for example. A business model built on a shifting, distributed workforce requires this kind of “continuous, soft surveillance” simply to function. What is striking, however, is that traditional industries are also embracing this approach, even though for them it is not strictly necessary.

This is demonstrated by a long article on The Verge, which explores the reality of algorithmic management for the people who must work under it. Stories include hotel housekeepers who are ordered by software to clean rooms in ways that are more demanding and draining for them, yet produce few, if any, benefits for hotel guests. Unsurprisingly, the efficiency-obsessed Amazon features too: relentless algorithmic management there leads not just to exhaustion, but to an elevated rate of worker injuries. Perhaps the easiest job to monitor and manage algorithmically is writing code. Every keystroke can be captured, every pause noted, and the computer’s webcam can be used for video surveillance of the programmer as he or she works. Here’s the experience of Mark Rony, a software engineer in Dhaka, Bangladesh, using the “productivity measurement tool” WorkSmart, as described in The Verge article:

The software tracked his keystrokes, mouse clicks, and the applications he was running, all to rate his productivity. He was also required to give the program access to his webcam. Every 10 minutes, the program would take three photos at random to ensure he was at his desk. If Rony wasn’t there when WorkSmart took a photo, or if it determined his work fell below a certain threshold of productivity, he wouldn’t get paid for that 10-minute interval. Another person who started with Rony refused to give the software webcam access and lost his job.
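The pay logic in that description is brutally simple, and worth seeing spelled out end to end. The following toy reconstruction is an assumption-laden sketch, not WorkSmart’s actual code: the function names and the stand-in activity score are invented, and only the rules quoted above are taken from the article:

```python
# Toy reconstruction of the interval logic described in the quote above.
# All names are hypothetical; this is not WorkSmart's actual code. Per
# the description: three random webcam checks per ten-minute interval,
# plus an activity threshold; fail either test and the interval is unpaid.

import random


def interval_is_paid(at_desk_checks: list[bool],
                     activity_score: float,
                     threshold: float = 0.6) -> bool:
    # Unpaid if any random photo caught an empty chair...
    if not all(at_desk_checks):
        return False
    # ...or if measured activity fell below the productivity threshold.
    return activity_score >= threshold


def simulate_interval(presence_prob: float = 0.95) -> bool:
    # Three photos taken at random moments in the ten-minute window.
    checks = [random.random() < presence_prob for _ in range(3)]
    # Stand-in for the composite score built from keystrokes, mouse
    # clicks and running applications.
    activity = random.random()
    return interval_is_paid(checks, activity)


if __name__ == "__main__":
    paid = sum(simulate_interval() for _ in range(48))  # an 8-hour day
    print(f"Paid for {paid} of 48 ten-minute intervals")
```

Over an eight-hour day, that is 48 separate chances to lose ten minutes’ pay to a mistimed coffee break or to a dip in a metric the worker cannot see.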

The experience of Angela, who worked in an insurance call center, points to what is likely to become an increasingly serious issue with the algorithmic management of jobs that require interpersonal skills. A key quality in this field is “empathy”, so workplace surveillance naturally seeks to measure it, and to push workers to show more of it. But there’s a big problem here:

It’s become conventional wisdom that interpersonal skills like empathy will be one of the roles left to humans once the robots take over, and this is often treated as an optimistic future. But call centers show how it could easily become a dark one: automation increasing the empathy demanded of workers and automated systems used to wring more empathy from them, or at least a machine-readable approximation of it. Angela, the worker struggling with [the call center evaluation software] Voci, worried that as AI is used to counteract the effects of dehumanizing work conditions, her work will become more dehumanizing still.

That experience exposes some other major issues with using automated systems to manage people. For example, it seems naive in the extreme to think that something as complex and elusive as “empathy” can not only be detected by sets of algorithms, but also graded in order to demand changes in a worker’s behavior. The same is true of many other aspects of work. It is simply not possible to distil the skills of a good human manager into lines of AI code, no matter how many or how well written – at least, not yet.
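To see just how reductive such grading tends to be, consider a deliberately crude sketch. This is purely hypothetical, and emphatically not how Voci or any real product works; it simply shows what reducing empathy to a machine-readable number can look like:

```python
# Deliberately crude sketch of how an "empathy score" might be computed
# from a call transcript. Purely hypothetical: this is not how Voci or
# any real product works. The point is how little such a metric captures.

EMPATHY_CUES = {"sorry", "understand", "appreciate", "help", "thank"}


def empathy_score(transcript: str) -> float:
    # Reduce "empathy" to the fraction of words matching a cue list;
    # tone, context and sincerity are all invisible to this metric.
    words = transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in EMPATHY_CUES)
    return hits / len(words)


if __name__ == "__main__":
    sincere = "I can hear how frustrating this is and I want to fix it"
    gamed = "sorry sorry I understand I appreciate thank you sorry"
    print(f"sincere call: {empathy_score(sincere):.2f}")  # scores 0.00
    print(f"gamed call:   {empathy_score(gamed):.2f}")    # scores 0.67
```

The sincere, plainly worded call scores zero, while the string of rote apologies scores highly: exactly the machine-readable approximation of empathy that the quote above warns about.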

Moreover, trying to automate management in this way brings with it the usual challenges encountered with AI systems. For example, the assumptions underlying the programs may contain hidden biases. And even if these are minimal, the algorithmic managers remain black boxes for workers, who are never told why a decision was made about them, nor given any opportunity to appeal against it. Indeed, moving from traditional to computer-based management provides a veneer of objectivity that is likely to make it even harder for workers to challenge decisions, since those decisions appear dispassionate and logical.

The key point is that this increasingly insensitive and ultimately counterproductive way of managing people is only possible because of continuous and thoroughgoing workplace surveillance. One way to stop this loss of humanity is to fight the loss of privacy that underpins it.

Featured image by Scott Lewis.