Dark Side of Free Speech
LONDON: Taking little time to rest after a contentious campaign, President Barack Obama heads to Burma, acknowledging its budding human-rights reforms and a government that, after more than two decades of isolation, is now eager for trade and open to new connections with the West.
Burma, officially known as Myanmar since 1989, has among the lowest rates of mobile-phone penetration in the world – about 5 percent of the population. Better networks and less expensive phones offer an enormous upside in human-rights terms, provided that users' freedom of expression and privacy are respected. Phones are now the most common way people go online in most developing economies, many of them for the first time.
But the greater freedoms that come with such a shift can add new dimensions to old challenges. Many of the suppressed voices uphold democratic values and human rights; others revive ancient hatreds. Freedom of expression online inevitably faces limits, and the International Covenant on Civil and Political Rights permits some restrictions. The question is, how can these be set in a way consistent with international human rights principles and standards?
Recent examples highlight the issues. International attention recently focused on the violence against the Rohingya Muslim population in Rakhine State, formerly Arakan, in western Myanmar. Last month, satellite imagery obtained by Human Rights Watch showed extensive destruction of homes and other property in a predominantly Rohingya Muslim area of the coastal town of Kyauk Pyu – one of several areas of new violence and displacement. Websites promoting anti-Rohingya hate speech have been identified as one of the means to incite violence.
Another example involves the tragedy in Norway, in which a self-confessed fan of extremist websites and a user of social media killed 77 people in order to publicize his manifesto of xenophobia. Racial hatred and religious intolerance are, of course, not new. The media’s role in inciting violence, from Nazi Germany to the fall of Yugoslavia and the Rwandan genocide, is well known. Policymakers face difficult choices: whether some internet content should be censored, including self-censorship by content providers, and what rules and processes might be developed to protect not only freedom of expression but also the rights of those most vulnerable to abuse.
Arguably, the internet’s greatest strengths are also its greatest weaknesses: Mass data stripped from its normal context can justify just about any opinion and help any lone voice find the likeminded, regardless of how extreme the view. Such views can serve as the spark igniting new violence, as was seen with the anti-Islamic film trailer posted on YouTube, which triggered protests across the Middle East.
The internet provides a permanent and interactive archive of speech and opinion unprecedented in human history – old opinions can be recycled within an instant, and global feedback loops are created; local acts can gain global significance. Technology acts to legitimize individual thoughts, breaking the divide between private and public realms. If technology can help foster the Arab Spring, can it also feed a European Winter?
In the past, intermediaries in the media were trusted to sift, select and present information. So, too, in the internet age. But the websites of many trusted media outlets permit unmoderated comment sections, giving space for a range of extreme opinions to be expressed. In the past, letters to the editor were the primary mass outlet for such expression, and editors had the final say on which views were heard. Now the discourse is largely unedited. The economic model for media in the internet age is to attract more “hits,” encouraging some outlets to stoke polarized discourse to increase website traffic. Old-fashioned editing of published material is still necessary – not to suppress dissenting views, but to ensure that statements inciting violence, like libelous statements, are not published.
A related issue concerns transparency of censorship and self-censorship on the internet. With the exception of Google and Twitter, most internet service providers do not disclose when they remove content. Nor do most report on requests from governments or the public to remove content or when they accede to these requests. In 2011, mobile telephone companies closed the network in Egypt at the height of the struggle against the old regime, and then issued pro-regime texts without informing customers they were under orders to do so. The companies involved may have had no choice, but there were human rights implications in each case. Did the companies foresee human rights risks when they entered into joint venture agreements with oppressive governments? How much mitigation should we expect from companies when the risk to individuals is clear? Rules are needed, due diligence is essential and companies need to be transparent about the risks and more open about the mitigation to minimize negative human rights impacts.
Governments can’t be trusted to regulate the internet on their own. But the status quo is not acceptable either, if hate speech that incites violence finds a home online, is rapidly disseminated, and goes unchallenged. Some companies have begun to explore the issues. The Global Network Initiative, which brings Microsoft, Yahoo and Google together with a range of civil society organizations and academics, is one example of multi-stakeholder collaboration on online human rights dilemmas. Perhaps similar collaboration is needed on issues of online content – where both governments and companies that publish news and commentary on the internet, including traditional and new media companies, follow common rules of disclosure about the content they remove, the directions they follow, the services they suspend, and the cooperation they offer to judicial and governmental agencies. This would be analogous to how governments and extractive companies reconcile and publish revenues to minimize corruption. A common set of rules is urgently needed – agreeable to all, privileging none.
Context is everything. National responses to protecting freedom of expression online are inadequate and prone to short-term pressures toward censorship. An international framework for protecting freedom of expression online, as well as principles for deciding when censorship is permissible, is needed. As Rebecca MacKinnon writes in Foreign Policy, many voices remain unrepresented in discussions about internet policy, particularly the poor and marginalized, and the development of rules, even in a so-called multi-stakeholder setting, must be premised on core human rights principles.
Within the United Nations family, some useful building blocks are in place: The former Special Rapporteur on Freedom of Expression has stressed that access to the internet is key to enabling many human rights in the modern world, in line with the right to seek, receive, and impart information. The Special Representative on Business and Human Rights provided a three-pillar framework, which establishes the state duty to protect human rights, an independent corporate responsibility to respect human rights, and the need for remedies where governance gaps exist.
To give practical meaning to the framework at the implementation stage, the European Commission is developing guidance for information and communication technology companies on due diligence relating to the human rights impact of their operations.
Freedom of expression is an enabling right, without which many other human rights cannot be enjoyed or protected. But even this right does not stand alone. There are times when limits can be placed on the right. Without international rules, will our leaders keep freedom of expression in mind when tested by challenging national developments?
John Morrison is executive director of the Institute of Human Rights and Business.