Moderating online content: fighting harm or silencing dissent?
In recent months, the world has seen growing criticism levelled against social media companies regarding how they moderate user content.
These companies often face critical human rights dilemmas: aggressively combating what is viewed as harmful content risks silencing ‘protected speech’, speech that, under international law, should be permitted. Intervening in or removing content affects the rights to freedom of expression and privacy, and can easily lead to censorship.
Faced with pressure to ensure greater accountability, many governments have started to regulate online content. Some 40 new social media laws have been adopted worldwide in the last two years. Another 30 are under consideration.
It’s a worrying trend, according to UN Human Rights, and one with immense consequences for public debate and participation.
For Peggy Hicks, Director of Thematic Engagement for UN Human Rights, nearly every country that has adopted laws relating to online content has jeopardised human rights in doing so.
“This happens because governments respond to public pressure by rushing in with simple solutions for complex problems,” said Hicks, speaking at a press briefing last week. “Additionally, some governments see this legislation as a way to limit speech they dislike and even silence civil society or other critics.”
The only way to address these challenges is to adopt a human rights-based approach, she said.
“We need to sound a loud and persistent alarm, given the tendency for flawed regulations to be cloned, and bad practices to flourish.”
New laws a cause for concern
In the aftermath of the torrent of racist abuse directed at Black English football players following the UEFA Euro 2020 final, calls have increased in the United Kingdom to implement new online legislation, and quickly.
But the draft Online Safety Bill, tabled in May, contains provisions that are likely to lead to the removal of significant amounts of protected speech.
Similarly, in India, following serious incidents of online incitement to violence, the government in February unveiled the Intermediary Guidelines and Digital Media Ethics Code. While the new rules entail some useful obligations for companies related to transparency and redress, a number of provisions raise significant concerns.
Under the new rules, for example, non-judicial authorities have the power to demand rapid take-downs, and companies and their staff face expanded liability for failure to comply. Moreover, the rules threaten to undermine secure end-to-end encryption. Several disputes over these provisions have already surfaced and been brought before Indian courts.
UN human rights experts have also expressed concerns about new and draft laws in other countries, including Australia, Brazil, Bangladesh, France, Singapore and Tanzania.
Hicks added that laws such as these often suffer from similar problems, including poor definitions of what constitutes unlawful or harmful content. “There is an over-emphasis on content take-downs, limited judicial oversight and an over-reliance on artificial intelligence or algorithms.”
Social media shut-downs
The flagging of content by social media companies has also prompted drastic responses from governments, including major disruptions. Last month, the Nigerian government announced the indefinite suspension of Twitter after the platform deleted a post from President Buhari’s account, saying the post violated company policies.
Within hours, Nigeria’s major telecommunications companies had blocked millions from accessing Twitter, and Nigerian authorities threatened to prosecute anyone who bypassed the ban.
For UN Human Rights, shutdowns like this matter because they restrict people’s ability to access information and affect other rights, including the rights to work, health and education. They also carry massive economic costs and undermine development.
“But it is not only a question of rights and law – having transparent rules around content moderation and ensuring different views are reflected is also a question of trust in institutions – one of the most precious commodities in democratic societies,” she says.
The EU’s Digital Services Act: an opportunity to regulate well
Decisions made in the European Union in the coming months could have an impact on digital policy globally, according to UN Human Rights. The EU is currently considering the Digital Services Act, the draft of which has some positive elements: it is grounded in human rights language, it contains clear transparency requirements for platforms, and it was drafted using a participatory process.
However, for UN Human Rights, some contradictory signals remain, including the risk that over-broad liability will be imposed on companies for user-generated content, and that there will be limited judicial oversight. There is also room to bring more voices to the table in the drafting process.
Hicks urged caution as the process unfolds: “When democracies start regulating, there is a ripple effect across the world, and other countries may follow. The internet does not have borders; we need to aim for a global digital space where it is safe for people to exercise their rights.”
Five actions for a way forward
To address the dilemmas of regulation and moderation of online content, UN Human Rights has proposed five actions for States and companies to consider.
First, UN Human Rights urges that the focus of regulation should be on improving content moderation processes, rather than adding content-specific restrictions.
For example, when complex issues are at stake, decisions should be made by people, not algorithms.
Second, restrictions imposed by States should be based on laws, they should be clear, and they should be necessary, proportionate and non-discriminatory.
Third, companies need to be transparent about how they curate and moderate content and how they share information, and States need to be transparent about their requests to restrict content or access users’ data.
Fourth, users should have effective opportunities to appeal against decisions they consider unfair, and independent courts should have the final say over the lawfulness of content.
Finally, civil society and experts should be involved in the design and evaluation of regulations.
Companies can and must do better
Social media companies are often criticised both for failing to take down harmful content and for the content they do take down.
In either case, people have few channels through which to raise their concerns. For example, during the upsurge in violence in Israel and the Occupied Palestinian Territory in May, Palestinian voices were disproportionately undermined by social media companies’ content moderation practices, and there were limited avenues for challenging take-down decisions. Instagram acknowledged problems with its automated curation systems.
The UN Guiding Principles on Business and Human Rights stipulate that all companies have a responsibility to respect human rights.
Companies can and must do much more to be transparent and to provide effective and accessible redress channels, says Hicks. Both people targeted by incitement online and those whose content is wrongly removed need clear and objective responses to their concerns.
“We face competing visions for our privacy, our expression and our lives, spurred on by competing economies, and competing businesses,” says Hicks.
“Companies and States alike have agreed to respect human rights. Let’s start holding them to that.”
23 July 2021