Content regulation – what’s the (online) harm?

October 20, 2019

In recent years, national legislators in EU Member States have been
pushing for new laws to combat negative societal phenomena such as
hateful or terrorist content online. These regulatory efforts have one
common denominator: they shift the focus from conditional intermediary
liability to holding intermediaries directly responsible for the
dissemination of illegal content on their platforms.

Two prominent legislative and policy proposals of this kind that will
significantly shape the European debate around the future of
intermediary liability are the UK White Paper on Online Harms and the
newly adopted Avia law in France.

UK experiment to fight online harm: overblocking on the horizon

In April 2019, the United Kingdom (UK) government proposed a new regulatory model that includes a so-called statutory duty of care, stating that it wants to make platform companies more responsible for the safety of online users. The White Paper foresees a future regulation that holds companies accountable for a set of vaguely predefined "online harms", which include not only illegal content but also user behaviours that are deemed harmful yet not necessarily illegal.

EDRi and Access Now have long emphasised the risk that privatised law
enforcement and heavy reliance on automated content filters pose to
human rights online. In this vein, multiple civil society organisations,
including EDRi members, have warned against the alarming measures the
British approach contains. The envisaged duty of care, combined with heavy fines, creates incentives for platform companies to block online content even when its illegality is doubtful, simply to avoid liability. The regulatory approach proposed by the UK Online Harms White Paper will effectively coerce companies into adopting content filtering measures that ultimately result in the general monitoring of all information shared on online platforms. Driven by over-compliance with state demands, such conduct often amounts to illegitimate restrictions on freedom of expression or, in other words, online censorship. Moreover, a general monitoring obligation is currently prohibited by European law.

The White Paper also covers activities and content that are not illegal but potentially undesirable, such as advocacy of self-harm or disinformation. This is highly problematic with regard to the human rights law criteria that guide restrictions on freedom of expression. The ill-defined and vague concept of "online harms" cannot serve as a proper legal basis to justify an interference with fundamental rights. Ultimately, the proposal falls short of providing substantial evidence to support its approach. It also plainly fails to address key issues of online regulation, such as the content distribution mechanisms that lie at the core of companies' business models, the opacity of algorithms, violations of online privacy, and data breaches.

French Avia law: Another "quick fix" to online hate speech?

Inspired by the German Network Enforcement Act (NetzDG), France has now adopted its own piece of legislation, the so-called Avia law, named after the Rapporteur of the file, Member of Parliament Laetitia Avia. Similarly to the NetzDG, the law requires companies to remove manifestly illegal content within 24 hours of receiving a notification about it.

Following its German predecessor, the Avia law encourages companies to be overly cautious and to pre-emptively remove or block content in order to avoid substantial fines for non-compliance. The time frame in which they are expected to take action is too short to allow for a proper assessment of each case at stake. Importantly, the French Parliament does not rule out the possibility that companies will resort to automated decision-making tools to process the notices. Such a measure may in itself be grounded in the legitimate objective of fighting hatred, racism, LGBTQI+-phobia and other forms of discriminatory content. However, tackling hate speech and other context-dependent content requires careful and balanced analysis. In practice, leaving it to private actors, without adequate oversight and redress mechanisms, to decide whether a piece of content meets the threshold of "manifest illegality" will be damaging for freedom of expression and the rule of law.

However, the Avia law also has positive aspects. It provides safeguards for procedural fairness by requiring individuals who notify potentially illegal content to state the reasons why they believe it should be removed. Moreover, the law obliges companies to establish internal complaint and appeal mechanisms for both the notifier and the content provider. Transparency obligations on content moderation policies are also introduced. Lastly, when monitoring compliance with the law, the regulator established by the Avia law does not focus its evaluation solely on the amount of content removed, but also scrutinises over-removal.

Do not fall into the same trap!

We are currently witnessing regulatory efforts at the national and European level that seek to provide easy solutions to online phenomena such as terrorist content or hate speech, while ignoring the underlying societal issues. Most of the suggested solutions rely on filters and content recognition technologies with limited ability to assess the context in which a given piece of content has been posted. The proper safeguards and meaningful transparency requirements that should accompany these measures are often sidelined by legislators. Similar trends can be observed beyond the EU and its Member States, however. For instance, the Australian government recently adopted a new law imposing criminal liability on executives of social media platforms. Section 230 of the American Communications Decency Act (CDA) may be placed under a review process triggered by a presidential executive order that seeks to significantly limit the liability protections granted to platform companies by the existing law.

Legislators around the globe have one thing in common: the urge to "eradicate" vaguely defined "online harms". The rhetoric of danger surrounding online harm has become a driving force behind regulatory responses in liberal democracies. This is exactly the kind of logic frequently used by authoritarian regimes to restrict legitimate debate. With the upcoming Digital Services Act (DSA) potentially replacing the E-Commerce Directive in Europe, the EU has an extraordinary opportunity to become a trend-setter, establishing high standards for the protection of users' human rights while addressing legitimate concerns stemming from the spread of illegal online content.

For this to happen, the European Commission should propose a law that imposes workable, transparent and accountable content moderation procedures and a functioning notice-and-action system on platforms. Such positive examples of platform regulation should be combined with forceful action against the centralisation of power over data and information in the hands of a few big tech companies. EDRi and Access Now have developed specific recommendations containing human rights safeguards that should be incorporated both into content moderation exercised by companies and into State regulation tackling illegal online content. It is the European Commission's responsibility to ensure that fundamental rights are upheld when drafting any future legislation governing intermediary liability and redefining content governance online.

Access Now
https://www.accessnow.org/

Access Now's human rights guide on protecting freedom of expression in the era of online content moderation (13.05.2019)
https://www.accessnow.org/cms/assets/uploads/2019/05/AccessNow-Preliminary-Recommendations-On-Content-Moderation-and-Facebooks-Planned-Oversight-Board.pdf

E-Commerce review: Opening Pandora's box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

French law aimed at combating hate content on the internet (09.07.2019)
http://www.assemblee-nationale.fr/15/pdf/ta/ta0310.pdf

UK: Online Harms Strategy must "design in" fundamental rights (10.04.2019)
https://edri.org/uk-online-harms-strategy-must-design-in-fundamental-rights/

UK's Online Harms White Paper (04.2019)
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

(Contribution by Eliška Pírková, EDRi member Access Now, and Chloé Berthélémy, EDRi)