On 17 October 2019, the European Parliament, the Council of the European
Union (EU) and the European Commission started closed-door negotiations,
known as trilogues, with a view to reaching an early agreement on the
Regulation on preventing the dissemination of terrorist content online.
The European Parliament improved the text proposed by the European
Commission by addressing its dangerous pitfalls and by reinforcing
rights-based and rights-protective measures. The position of the Council
of the European Union, however, supported the "proactive measures" the
Commission suggested, meaning potential "general monitoring obligations"
and, in practice, automated detection tools and upload filters to
identify and delete "terrorist content".
Finding middle ground
In trilogue negotiations, the parties (the European Parliament,
Commission, and Council) attempt to reach a consensus starting from
what can be very divergent texts. In the Commission's and Council's
versions of the proposed Regulation, national competent authorities have
the option to force the use of technical measures upon service
providers. The Parliament, on the contrary, deleted all references to
forced pro-activity and thus brought the Regulation in line with Article
15 of the E-Commerce Directive, which prohibits obliging platforms to
generally monitor the user-generated content they host.
Ahead of the negotiations, the European Commission was exploring the
possibility of suggesting "re-upload filters" instead of upload filters
as a way towards building a compromise. Also known as "stay-down
filters", these distinguish themselves from regular filters by only
searching for, identifying and taking down content that has already been
taken down once. The aim is to ensure that content first deemed illegal
stays down and does not spread further online.
Upload or re-upload filters: What's the difference?
"Re-upload filters" entail the use of automated means and the creation
of hash databases that contain digital hash "fingerprints" of every
piece of content that hosting providers have identified as illegal and
removed. They also mean that all user-generated content published on the
intermediaries' services is monitored and compared with the material
contained in those databases, and is filtered out in case of a match. As
the pieces of content included in those databases are in most cases not
subject to a court's judgment, this practice could amount to an
obligation of general monitoring, which is prohibited under Article 15
of the E-Commerce Directive.
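The matching mechanism described above can be sketched as follows. This is an illustrative simplification using exact cryptographic hashes; deployed systems such as the shared industry hash database use proprietary perceptual hashing, and all function names here are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of content previously
# removed as illegal by hosting providers.
removed_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    # Exact cryptographic hash for illustration; real systems use
    # perceptual hashes designed to tolerate small edits.
    return hashlib.sha256(content).hexdigest()

def record_removal(content: bytes) -> None:
    removed_hashes.add(fingerprint(content))

def is_blocked(upload: bytes) -> bool:
    # Every single upload is compared against the database: this
    # blanket comparison is the general-monitoring aspect the
    # article argues Article 15 prohibits.
    return fingerprint(upload) in removed_hashes

record_removal(b"previously removed video")
assert is_blocked(b"previously removed video")
assert not is_blocked(b"unrelated holiday video")
```

Note that the filter has no notion of context: the same bytes are blocked whether they appear in propaganda or in a news report documenting it.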
Filters are not equipped to make complex judgments on the legality of
content posted online. They do not understand the context in which
content is published and shared, and as a result, they often make
mistakes. Such algorithmic tools do not take proper account of legal
uses of content, for example for educational, artistic, journalistic
or research purposes, for expressing polemic, controversial and
dissident views in public debates, or in the framework of
awareness-raising activities. They risk accidentally suppressing legal
speech, with exacerbated impacts on already marginalised individuals
and groups.
Human rights defenders as collateral damage
The way the hash databases will be formed will likely reflect
discriminatory societal biases. Certain types of content and speech are
reported more often than others, and platforms' decisions to
characterise them as illegal and add them to the databases often mirror
societal norms. As a result, content related to Islamic terrorist
propaganda is more likely to be targeted than white supremacist content,
even in cases in which the former actually documents human rights
violations or serves an awareness-raising purpose against terrorist
recruitment. Hash databases of allegedly illegal content are neither
accountable nor transparent, are not democratically audited or
controlled, and will likely disadvantage certain users based on their
ethnic background, gender, religion, language, or location.
In addition, re-upload filters are easy to circumvent on mainstream
platforms: Facebook declared that it had over 800 distinct edits of the
Christchurch shooting video in its hash database because users
constantly modified the original material in order to trick automatic
identification. Lastly, hash databases and related algorithms are being
developed by dominant platforms, which have the resources to invest in
such sophisticated tools. Obliging all other actors on the market to
adopt such databases risks reinforcing their dominant position.
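The circumvention problem has a simple technical root. With an exact hash, any modification to the file, however trivial, produces an entirely different fingerprint that no longer matches the database entry. The sketch below illustrates this with SHA-256; perceptual hashes are more robust to small edits, but the Christchurch example shows that determined re-editing defeats them too.

```python
import hashlib

original = b"bytes of a video in the hash database"
edited = original + b"\x00"  # a trivial one-byte modification

h_original = hashlib.sha256(original).hexdigest()
h_edited = hashlib.sha256(edited).hexdigest()

# The edited copy's fingerprint differs completely, so a filter
# keyed on the original's hash will not catch the re-upload.
assert h_original != h_edited
```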
A more human rights compatible approach would follow the Parliament's
proposal, in which platforms are required to implement measures
(exclusive of monitoring and automated tools) only after they have
received a substantial number of removal orders, and only measures that
do not hamper their users' freedom of expression and right to receive
and impart information. The negotiating team from the European
Parliament should defend the improvements achieved after arduous
negotiations between the Parliament's different political groups and
committees. Serious problems, such as terrorism, require serious
legislation, not technological solutionism.
Terrorist content online Regulation: Document pool
Open letter on the Terrorism Database (05.02.2019)
Terrorist Content Regulation: Successful "damage control" by LIBE
Vice: Why Won't Twitter Treat White Supremacy Like ISIS? Because It
Would Mean Banning Some Republican Politicians Too (25.04.2019)
(Contribution by Chloé Berthélémy, EDRi)