By Ethan Shattock, Department of Law, Maynooth University

COVID-19 has raised numerous questions about whether self-regulation by social media platforms can effectively tackle false and harmful content online. This broader debate is often steeped in questions surrounding the limits of freedom of expression, a right guaranteed under Article 11 of the Charter of Fundamental Rights of the European Union. Moreover, under Article 10 of the European Convention on Human Rights (ECHR), the right to freedom of expression includes the right to “receive and impart information and ideas without interference.” While misinformation can pollute the environment in which information and ideas are received, freedom of expression must be balanced against interrelated and at times conflicting rights. For example, false information can often be seen as a threat to electoral security. Article 3 of Protocol No. 1 to the ECHR guarantees the right to free elections. If citizens receive dubious and low-quality information, and then vote on the basis of this information, electoral security can be compromised. Electoral rights are particularly at risk where false information is targeted at vulnerable voters, who then exercise those rights on the basis of false and misleading rumours. In this way, the challenge of misinformation must be seen as one that potentially implicates the right to free elections.

In recent months, the COVID-19 pandemic has been characterised by growing recognition that medical and health-related false information also poses potential threats to public safety. This matters from a fundamental rights perspective too, as numerous articles of the ECHR permit limitations on freedoms that can be justified on the basis of protecting public health. In light of growing concerns about how digital platforms have failed to “flatten the curve” of misinformation online, the need to reassess the current model of self-regulation for false content is becoming increasingly pressing. At present, the European Commission issues non-binding and voluntary guidelines to technological “signatories” such as Twitter, Facebook, and Google through the Code of Practice on Disinformation. The future of this code is uncertain, with the Commission having acknowledged potential scope for more direct regulatory intervention in combatting the spread of digital falsities. However, the development of the Facebook Oversight Board (FOB) represents a new form of technological self-regulation, aimed at making Facebook more accountable through the introduction of an independent, expert board that will make critical decisions on whether to remove or allow certain content on the platform.

Constructed as a “quasi-judicial” branch of Facebook, the FOB is built on the premise of making key decisions that balance the fundamental right to freedom of expression against competing interests such as “safety” and “veracity”. Despite the evident importance of protecting fundamental rights, reference to specific human rights instruments, and to how these should be interpreted and mediated by the FOB in reaching decisions, is virtually absent from the FOB’s two foundational documents, the Charter and the Bylaws.

Composed of 40 members, the oversight board will be financially empowered by a trust, which will enable the board to receive, assess, and make content-related decisions. Cases will be referred by both Facebook and its users, although only referrals from Facebook will initially be reviewed. These cases will be assessed on the basis of whether the required decision is both “significant” and “difficult”. Within the context of the FOB, “significant” means content that has “real-world impact, in terms of severity, scale, and relevance to public discourse.” Additionally, “difficult” refers to material and content that “is disputed” and where, as a result, “the decision is uncertain and/or the values are competing.” In light of the scale of the problem of misinformation, particularly its health-related implications, vast numbers of posts containing COVID-19 and other forms of misinformation would fall into both of these categories. This raises the critical question of whether Facebook’s new oversight board is a tool that is fit for purpose in combatting online falsehoods that threaten fundamental rights.

There is a high likelihood that the FOB will have only limited success in mitigating the harmful effects of unverified rumours, especially those that could threaten public health. A number of problems highlight why the board is likely to fall short as an effective solution to this growing problem.

A major problem is that the FOB does not even attempt to address content on WhatsApp. Throughout the numerous frantic and confused stages of COVID-19, WhatsApp, which is owned by Facebook, has been particularly instrumental in fuelling misinformation. In response to widespread public criticism, the platform created a “coronavirus information hub” that urges users to “choose reliable sources of information” related to the pandemic. Limits were also placed on the app’s forwarding function. However, these restrictions have not stopped the platform from acting as a hub of COVID-19 misinformation. This continues WhatsApp’s disconcerting track record from the run-up to the 2018 Brazilian presidential election, when numerous “fake news” stories flooded the app, often in favour of the eventual winner, Jair Bolsonaro. For these reasons, WhatsApp cannot realistically be treated merely as a messaging app, but must instead be seen as an influential communicative chain of misinformation, one made more impenetrable by its end-to-end encryption and an inaccessible application programming interface (API) that prevents researchers from identifying how and why misinformation goes viral and diffuses through the platform.

In addition to this WhatsApp gap, the FOB does not adopt an approach centred on identifying and preventing the algorithmic techniques used to target vulnerable users with false information. Instead, it is oriented around a case-by-case and ultimately slow process of making decisions on one piece of content at a time. Disappointingly, this overlooks a key driver of the spread of false and harmful content online: the systematic microtargeting of users with selective information. Moreover, while the FOB can recommend policy changes to Facebook on the basis of its individual decisions, whether to advance and follow such recommendations is entirely at Facebook’s discretion. This creates a problem whereby, although the FOB is touted as an independent mechanism that can hold Facebook to account for its content, it lacks the robust powers needed to effect sweeping changes that would reduce the commercial and political incentives to flood users’ news feeds with bad, false, and often harmful information. When considering the human rights that are so clearly at stake, the assertion that the FOB can effectively counter false information on Facebook may well itself be a source of “fake news”.