Key questions raised about algorithmic transparency by new GDPR case brought against Uber by its drivers

Posted on Jul 29, 2020 by Glyn Moody

Back in 2017, this blog noted a new threat to privacy from the increasing use of workplace surveillance. Once people’s work is quantified automatically, it can be used for algorithmic management, as we described earlier this year. The coronavirus lockdown has led to millions of people working from home for the first time. As well as presenting numerous issues for workers, it brings new challenges for managers. Some fear that people aren’t working as efficiently as they could when at home, and this has presented an opportunity for vendors of office surveillance systems. For example, MIT Technology Review discusses Enaible:

It is developing machine-learning software to measure how quickly employees complete different tasks and suggest ways to speed them up. The tool also gives each person a productivity score, which managers can use to identify those employees who are most worth retaining – and those who are not.

At the heart of the system lies an algorithm called Trigger-Task-Time. As input, it takes the typical workflow for different workers: what triggers, such as an email or a phone call, lead to what tasks, and how long those tasks take to complete. The software uses this data to assign each worker a “productivity score” between 0 and 100. Since this approach can be applied to any task, in theory workers across a company can be compared by their scores, even if they do different jobs. A productivity score also reflects how a person’s work increases or decreases the productivity of other people on their team.
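The quoted description is high-level, but it is enough to sketch what such a system might look like in code. The following is a minimal, purely illustrative Python sketch of a Trigger-Task-Time-style scoring function; the event fields, baseline times and 0–100 scaling are assumptions made for illustration, not Enaible’s actual implementation.

```python
from dataclasses import dataclass

# Purely illustrative sketch of a "Trigger-Task-Time"-style score.
# The event structure, baseline times and 0-100 scaling are assumptions,
# not Enaible's actual algorithm.

@dataclass
class TaskEvent:
    trigger: str            # e.g. "email", "phone_call"
    task: str               # e.g. "reply", "update_ticket"
    duration_minutes: float # how long the worker took on this task

# Assumed baseline: typical time (minutes) a given trigger -> task pair takes.
BASELINES = {
    ("email", "reply"): 10.0,
    ("phone_call", "update_ticket"): 15.0,
}

def productivity_score(events: list[TaskEvent]) -> float:
    """Map a worker's events to a 0-100 score: faster than baseline scores higher."""
    ratios = []
    for e in events:
        baseline = BASELINES.get((e.trigger, e.task))
        if baseline is None or e.duration_minutes <= 0:
            continue  # ignore events with no baseline or invalid timing
        # ratio > 1 means faster than baseline, < 1 means slower
        ratios.append(baseline / e.duration_minutes)
    if not ratios:
        return 0.0
    avg = sum(ratios) / len(ratios)
    # Clamp to the 0-100 range described in the article
    return max(0.0, min(100.0, 50.0 * avg))

events = [
    TaskEvent("email", "reply", 8.0),
    TaskEvent("phone_call", "update_ticket", 20.0),
]
print(round(productivity_score(events), 1))
```

Even in this toy form, the problem discussed below is visible: the baselines and the scaling are chosen by whoever writes the software, and they remain invisible to the person being scored.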

However useful this may or may not be for managers, it is clearly problematic for those whose workflows are being examined so minutely, not least in terms of their privacy. In particular, they are subject to surveillance by an inscrutable algorithm, using unknown criteria, whose output could mean the difference between keeping and losing a job. In many parts of the world, that’s just a fact of modern employment. However, an important aspect of these algorithms is that they operate on data that refers to a person. As such, in the EU they are subject to the GDPR. Article 22 of the data protection law deals specifically with algorithms:

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

There are a number of qualifications to that requirement, including if the person involved gives their explicit consent. However, in that case there is another proviso, which requires the use of “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”

This complicated matter is made more difficult by the fact that the GDPR has only been in operation for two years, so there is as yet little case law to shed light on exactly how Article 22 applies to algorithms operating on personal data. That makes the following legal action against Uber, reported by The Guardian, of considerable importance:

Minicab drivers will launch a legal bid to uncover secret computer algorithms used by Uber to manage their work in a test case that could increase transparency for millions of gig economy workers across Europe.

Two UK drivers are demanding to see the huge amounts of data the ride-sharing company collects on them and how this is used to exert management control, including through automated decision-making that invisibly shapes their jobs.

As the article rightly notes, the outcome of this case could have major ramifications for other companies operating under the GDPR. Specifically, those in the so-called “sharing economy”, built on self-employed workers, will be directly affected. But if the court rules that algorithmic management must comply with the GDPR and its specific privacy safeguards, then many other companies in the EU that use, or are considering using, office surveillance systems may find that they can do so only in circumscribed ways.

More generally, a ruling on the use of algorithms in decisions about staff is likely to have important implications for the broader issue of algorithmic accountability, discussed on this blog back in 2017. People are waking up to the implications of embedding algorithms in all kinds of systems. That includes ones that are literally a matter of life or death – for example, those involving health and safety – as well as relatively minor ones in everyday life, such as the algorithms that determine which ads we see online.

Many governments are working on new frameworks to regulate this area. For example, New Zealand has drawn up an Algorithm Charter, which it says is a “commitment by government agencies to carefully manage how algorithms will be used to strike the right balance between privacy and transparency”. The European Commission has announced that it is carrying out an in-depth analysis of algorithmic transparency. The European Parliament has already released a governance framework for algorithmic accountability and transparency, which calls for regulatory oversight and legal liability of the private sector, and a global approach to algorithmic governance. Once the EU has formulated and passed laws that explicitly address key issues of algorithmic transparency, they are likely to have a major impact around the world, as is already starting to happen with the GDPR.

Featured image by Hipsta.space.