By Angela Paul, Northumbria University

Affect, or emotion, recognition is a recent phenomenon that has emerged alongside the mass deployment of facial recognition technologies by governments worldwide. The process of affect recognition involves recording and coding an individual’s facial expressions on video, with a view to assessing their emotions.[1] The European Union’s iBorderCtrl pilot project, funded by EU Horizon 2020, assigned travellers a risk score generated by an automated behavioural analysis tool. This raises the question of how effectively such a bespoke technology can understand human emotions, and the possible ethical implications of basing crucial decisions on perceived notions about an individual’s facial expressions, drawn from short video footage.

Taking the Issues of Facial Recognition One Step Further

In 2020, the Court of Appeal ruled that South Wales Police’s use of live automated facial recognition technology was partly unlawful. The Court specifically referred to Article 8(2) of the European Convention on Human Rights to emphasise that the risks this technology poses to individual privacy outweigh the national security interests of the police.[2] Facial recognition technologies are also widely criticised for their inability to correctly identify non-White faces, especially those of women, and this has led to wrongful arrests.[3] Furthermore, the Council of Europe has recently published guidelines intended to ensure that facial recognition is deployed in a manner that respects the fundamental rights of individuals; in this publication, it states that affect recognition is risky.[4] Given the controversies already surrounding facial recognition technologies, it is therefore important to consider whether emotion recognition technologies compound these ethical concerns.

The Association for Psychological Science has published an in-depth study on the challenges of emotion recognition, which evidences in particular the differences in communication styles that can occur across cultures.[5] In fact, an experiment conducted on iBorderCtrl itself resulted in a journalist being accused by the software of lying, even though all the answers they provided were true. Furthermore, the researchers behind iBorderCtrl have acknowledged that a sample group used when testing the system was unbalanced, as it contained fewer non-White and female participants.[6] It is therefore questionable whether the minute facial expressions of travellers at borders provide a reliable and ethical means of detecting deception.

The Case of iBorderCtrl

Patrick Breyer, a Member of the European Parliament, raised these ethical issues in his case against the European Commission’s Research Executive Agency (REA). In November 2018, Breyer asked the REA for access to fifteen documents concerning iBorderCtrl. However, the REA partially disclosed only two deliverables that were already publicly available. The undisclosed documents included a deliverable titled “ethics of profiling, the risk of stigmatization of individuals and mitigation plan”;[7] this title suggests that the document could be highly important in analysing the discriminatory effects that iBorderCtrl could have on individuals. The Commission cited Article 4(2) of Regulation No 1049/2001, which allows EU institutions to refuse access to documents whose disclosure would undermine commercial interests, unless there is an overriding public interest in disclosure.

A worrying revelation from the recent hearing of this case is that the confidential documents analysed by the judges of the European Court of Justice contained information relating to “ethnic characteristics”.[8] It has been widely recognised that the use of facial recognition technologies in law enforcement commonly results in discrimination based on ethnicity. It is therefore important that bodies funding or developing technology for law enforcement, such as the REA, make documentation relating to ethnicity publicly available.

Future Perspectives

Without adequate safeguards and limitations on projects centred around mass surveillance and affect recognition, the rights of citizens are severely affected. There is thus an overriding public interest in making these documents publicly available. The judgment in this case could set a precedent for the importance of transparency regarding the research, and the implementation, of surveillance technologies in Europe. The European Commission has already begun working on the next stage of iBorderCtrl, under the title TRESPASS, which combines affect recognition and behavioural analysis with further security checks.

My current research revolves around the use of UAVs in law enforcement, and there is a looming possibility that visual surveillance technologies, such as police drones, will be further combined with other controversial biometric technologies such as facial and emotion recognition. Ethical issues are unavoidable, and are compounded, when controversial technologies are combined with “deception detection”, which is itself based on unfounded understandings of facial expression.


[1] Daniel Thomas, ‘The camera that knows if you’re happy – or a threat’ BBC (17 July 2018) <https://www.bbc.co.uk/news/business-44799239>.

[2] Edward Bridges v The Chief Constable of South Wales Police [2020] EWCA Civ 1058, para 210. See also Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, as amended) (ECHR), Art 8.

[3] See for example: Amnesty International, ‘Ban Dangerous Facial Recognition Technology that Amplifies Racist Policing’ Amnesty (26 January 2021) <https://www.amnesty.org/en/latest/news/2021/01/ban-dangerous-facial-recognition-technology-that-amplifies-racist-policing/>; Kashmir Hill, ‘Another Arrest, And Jail Time, Due to A Bad Facial Recognition Match’ The New York Times (29 December 2020) <https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html>; Tom Simonite, ‘The Best Algorithms Struggle to Recognize Black Faces Equally’ Wired (22 July 2019) <https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/>.

[4] Council of Europe, Consultative Committee of Convention 108, ‘Guidelines on Facial Recognition’ (28 January 2021) T-PD(2020)03rev4.

[5] Lisa Feldman Barrett et al., ‘Emotional Expressions Reconsidered: Challenges to Inferring Emotion from Human Facial Movements’ (2019) 20 Psychological Science in the Public Interest 1–68.

[6] Ryan Gallagher and Ludovica Jona, ‘We Tested Europe’s New Lie Detector for Travelers – and Immediately Triggered a False Positive’ The Intercept (26 July 2019) <https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/>.

[7] European Commission, ‘Your Confirmatory Application Pursuant To Article 7(2) Of Regulation (EC) No 1049/2001 – Application For Access To Documents’ (Research Executive Agency 2019) <https://www.asktheeu.org/de/request/6091/response/20002/attach/3/REA%20reply%20Confirmatory%20request%20signed.pdf?cookie_passthrough=1>; also see Case T-158/19 Patrick Breyer v European Commission (2019).

[8] Natasha Lomas, ‘Orwellian AI lie detector project challenged in EU court’ TechCrunch (5 February 2021) <https://techcrunch.com/2021/02/05/orwellian-ai-lie-detector-project-challenged-in-eu-court/>.