On 28 April, without a plenary vote, the European Parliament adopted a widely criticized piece of online content regulation colloquially known as TERREG, which aims to prevent terrorism by restricting the dissemination of online content deemed terrorist by as-yet-undetermined extrajudicial government bodies.
It is now on its way to becoming law in the European Union. As we enter the second year of the global pandemic, it is hardly the most headline-grabbing development, flying under the radar of media outlets in most member states, cloaked in enough jargon to deflect attention from the broader audiences it will ultimately impact.
When the first draft of the regulation appeared in 2018, it rode in on a wave of public calls for action following a number of terrorist attacks in Europe in 2015-2016. From the start, the debate around it was infused with an underlying sense of urgency. But, like most fear-driven policy projects, it was ill-designed.
After much substantive work in the committees, which took on board many citizens’ and experts’ concerns, the previous European Parliament, shortly before its mandate expired, adopted an improved version of the regulation in April 2019.
Terrorism experts and researchers have been careful to point out, if only timidly, that there is a lack of evidence of a causal link between exposure to online content and terrorism. The EU’s own agencies and bodies, some member states, and many independent outside voices, including UN Rapporteurs and numerous civil society organisations, all expressed concerns about the pitfalls of its technical implementation as well as the risk of long-term social and political harm, were it to become EU policy.
Among other things, the regulation requires platforms to remove user content within one hour of being notified by a government. The image that this urgent removal imperative aims to evoke – a lit fuse which, if not stamped out immediately, will inevitably lead to a devastating explosion – resonates well.
However, it has little to do with real-life processes of radicalisation or terrorist operations, or with human cognition and how the media influences it, for that matter. It is almost as if the policymakers drafted it with that Monty Python sketch in mind, in which a joke gets into circulation that is so lethal that no one can hear it without dying from fits of laughter.
These are no joking matters, however. The national legal frameworks in the EU recognise that some speech can be so dangerous that it needs to be curtailed. But any such restriction of expression requires judicial oversight, because the consequences are dire and, in the absence of such oversight, the power can be abused. In France, a law proposing a similar one-hour removal deadline was recently ruled unconstitutional. Throughout TERREG’s legislative lifecycle, critics have described the very likely scenarios of misuse by less democratically minded governments. Further, parliamentary discussions on this topic make clear that the very nature of the terrorist threat in the EU is deeply contested, with Members of the European Parliament emphasising divergent ideologies, causes, and groups depending on their place on the political spectrum and the country they represent.
While many of the Parliament’s damage-control amendments were kept in the final text, in the closed-door negotiations that followed, the architects of TERREG were still determined to sacrifice due process on the altar of expediency. Sadly, the attacks last autumn in France and Austria offered justification to skip further debate, and the regulation was adopted last week, forgoing even the previously announced plenary vote in the Parliament. Platforms operating in the EU will now be obliged to make speech restriction decisions in an arbitrary one-hour window at the command of EU governments, without judicial oversight. What could go wrong?
Populist leaders in the EU are already restricting freedoms and dismantling institutions of democratic accountability. It is hard to see how the potential benefits of being able to sanitize the internet quickly outweigh the consequences of polluting the real world with more distrust and conflict, while giving budding European autocrats new tools to limit free speech. For those with contempt for democracy and the EU, this new legislation is welcome news.
But what about the rest of us? From the vantage point of citizen participation and citizens’ interests, the experience and outcomes of the TERREG process are bad news. The democratic process is about compromise, but collective problem-solving cannot be reduced to technocratic bargaining about procedural improvements, especially when the proposed solutions affect fundamental rights and freedoms.
After years of passing through the democratic meat-grinder in Brussels, the text of this regulation ends up largely favouring entrenched bureaucrats’ and governments’ interests and discarding important expert opinion, without any clear benefit to (and likely at the expense of) the public. We now have a regulation that is not an effective solution to terrorists’ use of the internet and online radicalisation in the EU, and that carries great new risks for both democracy and citizens’ participation.
In April, the Commission, as part of its Conference on the Future of Europe, issued a call for citizens’ input into a host of policies, including terrorism. But coming after it failed to listen to staunch defenders and practitioners of European democracy, and after it actively enabled, via TERREG, the suppression of their voices in the years to come, this call seems disingenuous. Team Europe, we have to do better.
Stefania Koskova is a European citizen who provided Members of the European Parliament with briefings and written comments on TERREG. She has led multiple civil society projects, both in the EU and the Balkans, to address terrorism, hate, and disinformation in communities and on the internet, most recently the Resonant Voices Initiative. In 2018-2019 she was a Technology Policy Fellow with the Mozilla Foundation. She is also a Director at Groundscout.