Big win for online freedom in EU: key parts of France’s new “hate speech” law ruled unconstitutional
One of the most worrying trends in today's online world is the move by governments against "hate speech". The term itself is vague, which makes policing it hard. Making things worse, recent attempts to rein in hate speech typically place responsibility on online platforms such as Google, Facebook and Twitter. This effectively outsources censorship to private companies, making it much harder to scrutinize what they are doing, and why. Moreover, those companies will naturally tend to take down material that may or may not be "hate speech", in order to avoid the often substantial fines that can be imposed if they fail to do so.
One of the first major laws against hate speech came from Germany, which has long grappled with the after-effects of its troubled past. From 1 January 2018, any Internet platform with more than 2 million users must provide effective mechanisms for reporting and deleting potentially illegal content. As Deutsche Welle wrote at the time:
As of Monday, content such as threats of violence and slander must be deleted within 24 hours of a complaint being received, or within seven days if cases are more legally complex. The companies are also obliged to produce a yearly report detailing how many posts they deleted and why.
If the deadlines are breached, companies can be fined up to €50 million ($60 million), and people can report violations to Germany’s Federal Office of Justice (BfJ), which has made an online form available for the purpose.
The new law, known as NetzDG (short for “Netzwerkdurchsetzungsgesetz”, or network enforcement law), was fiercely opposed by digital rights groups, but in vain. David Kaye, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, also voiced his concerns. He worried that the new law would harm not just anonymous expression, but also privacy, because of a requirement to store and document material that was taken down:
By requiring complaints and measures to be documented and stored for an undisclosed amount of time, without providing further protection mechanisms against the misuse of such data, individuals become more vulnerable to State surveillance. These provisions also allow for the collection and compilation of large amounts of data by the private sector, and place a significant burden and responsibility on corporate actors to protect the privacy and security of such data.
That risk is particularly unfortunate, given Germany’s generally strong record on protecting privacy. As is often the case, once a precedent had been set, others were keen to follow. France began work on a similar law against “contenus haineux” – “hateful content” – in 2019. The law was finally passed in May of this year:
After months of debate, the lower house of Parliament adopted the controversial legislation, which will require platforms such as Google, Twitter and Facebook to remove flagged hateful content within 24 hours and flagged terrorist propaganda within one hour. Failure to do so could result in fines of up to €1.25 million.
But in a surprise development, France's highest constitutional body, the Constitutional Council, has just struck down most of the law as incompatible with the French constitution. Politico reported that the court ruled that the new law "undermines freedom of expression and communication in a way that is not necessary, adapted nor proportionate".
That’s great news for freedom of speech in France, but the judgment’s importance transcends its immediate effect of striking down most of the bad features of this legislation. In particular, the Constitutional Council’s comments underline points made repeatedly by digital rights activists in both Germany and France, but ignored by politicians in those countries. The Council was worried about the difficulty of deciding what counts as “hateful content” within such a short deadline. It specifically noted the danger that, under the new law, companies would have a strong incentive to delete any content flagged to them, whether or not it was lawful. That is a far cry from a meticulous legal process in which a judge decides whether content is legal.
Those comments are important, because the European Union is currently working on a new Digital Services Act, which will seek to bring in EU-wide rules on important issues, including the removal of unlawful content. Having both the German and French hate speech laws in place would have been a powerful argument for introducing something similar for the whole of the EU. Now that the core of the French approach has been ruled unconstitutional, the argument that it is established practice, with existing laws that can serve as a template, no longer holds.
One other comment by the Constitutional Council could prove important in this context. As Politico reports, the French court also said that the “obligations of means for platforms, including requirements to implement procedures, both human and technical, to ensure that notified content was processed as fast as possible” were unconstitutional. In practice, that means automated filters of the kind that the EU Copyright Directive effectively requires in order to stop unauthorized copyright material from being uploaded. It would have been an obvious next step to extend those filters to block “hate” material – even though doing so would have made automated filters even more likely to block perfectly lawful posts. The latest ruling by the French court undermines the argument that general filters of this kind should be adopted under the Digital Services Act – something that is already controversial in any case.
Featured image by gueluem.