
Hate speech online: Lessons for protecting free expression

November 20, 2019 · Digital Freedom

On 21 October, David Kaye – UN Special Rapporteur on the promotion and
protection of the right to freedom of opinion and expression – released
the preliminary findings of his sixth report on information and
communication technology. They include tangible suggestions to internet
companies and states whose current efforts to control hate speech online
are failing to comply with the fundamental principles of human rights.
The EU Commission should consider Kaye's recommendations when creating
new rules for the internet and – most importantly – when drafting the
Digital Services Act (DSA).

The "Report of the Special Rapporteur to the General Assembly on online
hate speech" (docx) draws on international legal instruments on civil,
political and non-discrimination rights to show how human rights law
already provides a robust framework for tackling hate speech online. The
report offers an incisive critique of platform business models which,
supported by States, profit from the spread of "hateful content" whilst
violating free expression by wantonly deleting legal content. Instead,
Kaye offers a blueprint for tackling hate speech in a way which empowers
citizens, protects online freedom, and puts the burden of proof on
States, not users. Whilst the report outlines a general approach, the
European Commission should incorporate Kaye's advice when developing the
proposed Digital Services Act and other related legislation and
non-legal initiatives, to ensure that the regulation of hate speech does
not inadvertently violate citizens' digital rights.

Harmful content removal: under international law, there is a better way

Sexism, racism and other forms of hate speech (which Kaye defines as
"incitement to discrimination, hostility or violence") in the online
environment are quite rightly areas of attention for global digital
policy and law makers. But the report offers a much-needed reminder that
restricting freedom of expression online by deleting content is not
just an ineffective solution, but in fact threatens a multitude of
rights and freedoms that are vital for the functioning of democratic
societies. Freedom of expression is, as Kaye states, "fundamental to the
enjoyment of all human rights". If curtailed, it can open the door for
repressive States to systematically suppress their citizens. Kaye gives
the example of blasphemy laws: profanity, whilst offensive, must be
protected – otherwise such laws can be used to punish and silence
citizens who do not conform to a particular religion. Journalists such
as Glenn Greenwald have already pointed out how "hate speech"
legislation is used in the EU to suppress left-wing viewpoints.

Fundamental rules for restricting freedom of expression online

The report is clear that restrictions of online speech "must be
exceptional, subject to narrow conditions and strict oversight", with
the burden of proof "on the authority restricting speech to justify the
restriction". Any restriction is thus subject to three criteria under
human rights law:

Firstly, under the legality criterion, Kaye uses human rights law to
show that any hate speech restricted online (as offline) must be
genuinely unlawful, not just offensive or harmful. It must be regulated
in a way that does not give "excessive discretion" to governments or
private actors, and that gives impacted individuals independent routes
of appeal. Conversely, the current situation gives de facto regulatory
power to internet companies by allowing (and even pressuring) them to
act as the arbiters of what does and does not constitute free speech.
Coupled with error-prone automated filters and short takedown periods
that incentivise over-removal of content, this is a free speech crisis
in motion.

Secondly, on the question of legitimacy, the report outlines the
requirement for online hate speech laws and policies to treat such
speech in the same way as any other speech. This means ensuring that
freedom of expression is restricted only for legitimate interests, and
not curtailed for "illegitimate purposes" such as suppressing criticism
of States. Such illegal suppression is enabled by overly broad
definitions of hate speech, which can act as a catch-all for content
that States find offensive despite its being legal. A lack of strict
definitions in the counter-terrorism policy field has already had a
strong impact on freedom of expression in Spain, for example, where
"national security" has been abusively invoked to justify measures
interfering with human rights, and used as a pretext to adopt vague and
arbitrary limitations.

Lastly, necessity and proportionality are violated by current moderation
practices, including "nearly immediate takedown" requirements and
automated filters that clumsily censor legal content, which becomes
collateral damage in the war against hate speech. This violates rights
to due process and redress, and unnecessarily puts the burden of
justifying content on users. Worryingly, Kaye continues that "such
filters disproportionately harm historically under-represented
communities".

A rational approach to tackling hate speech online

The report offers a wide range of solutions for tackling hate speech
whilst avoiding content deletion or internet shutdowns. Guided by human
rights documents including the so-called "Ruggie Principles" (the 2011
UN Guiding Principles on Business and Human Rights), the report
emphasises that internet companies need to exercise a greater degree of
human rights due diligence. This includes transparent review processes,
human rights impact assessments, clear routes of appeal and human,
rather than algorithmic, decision-making. Crucially, Kaye calls on
internet platforms to "de-monetiz[e] harmful content" in order to
counteract the business models that profit from viral, provocative,
harmful content. He stresses that the biggest internet companies must
bear the cost of developing solutions, and share them with smaller
companies to ensure that fair competition is protected.

The report is also clear that States must take more responsibility,
working in collaboration with the public to put in place clear laws and
standards for internet companies, educational measures, and remedies
(both judicial and non-judicial) in line with international human rights
law. In particular, they must take care when developing intermediary
liability laws to ensure that internet companies are not forced to
delete legal content.

The report gives powerful lessons for the future DSA and other related
policy initiatives. In the protection of fundamental human rights, we
must limit content deletion (especially automated) and avoid measures
that make internet companies de facto regulators: they are not – nor
would we want them to be – human rights decision-makers. We must take
the burden of proof away from citizens, and create transparent routes
for redress. Finally, we must remember that the human rights rules of
the offline world apply just as strongly online.

Report of the Special Rapporteur on the promotion and protection of the
freedom of opinion and expression, A/74/486 (Advance unedited version)

E-Commerce review: Opening Pandora's box? (20.06.2019)

In Europe, Hate Speech Laws are Often Used to Suppress and Punish
Left-Wing Viewpoints (29.08.2017)

EU copyright dialogues: The next battleground to prevent upload filters

Spain: Tweet… if you dare: How counter-terrorism laws restrict freedom
of expression in Spain (13.03.2018)

CCBE Recommendations on the protection of fundamental rights in the
context of ‘national security’ 2019

(Contribution by Ella Jakubowska, EDRi intern)