Esperanza Miyake, Yureshya Perera and Sharifah Sekalala

As the race for AI technology intensifies within the Global North, the UK has not taken a backseat. Research in AI and its implementation have spread into sectors such as finance, healthcare, education, and security. Our first roundtable blog, AI in Healthcare (Part 1/2): Unveiling the Risks for Marginalized Populations, explores the positives and pitfalls of AI in healthcare, with a focus on its role in creating and exacerbating inequalities among marginalized populations. There, we discuss how, despite AI's promises, these benefits may not always materialize due to factors such as AI illiteracy and biases within AI systems and datasets. These risks could perpetuate or even worsen inequalities, leading to misdiagnoses and unequal resource distribution that disproportionately affect marginalized communities.

Taking these concerns into account, in this roundtable blog, we explore the use of AI within the NHS in the UK. As a diverse society and a melting pot of cultures, the UK presents unique challenges and opportunities in applying AI equitably. Three researchers from different disciplines come together in this blog to analyse the NHS’s current approach to AI and offer recommendations and considerations to ensure AI benefits all, particularly in areas where it can have the greatest positive impact.

Dr. Esperanza Miyake, Chancellor’s Fellow and Senior Lecturer at the University of Strathclyde 

Prof. Sharifah Sekalala, Professor of Global Health Law at the University of Warwick and Director of the Warwick Global Health Centre 

Yureshya Perera, Researcher at the University of Warwick and Activist at the Amnesty UK Anti-Racism Network 

***

Esperanza: Do you think the NHS is sufficiently overseeing the use of AI in healthcare services, and what are your thoughts on its regulatory approach to AI?

Yureshya: The ‘Transformation Directorate’ at NHS England leads the digital transformation journey in the NHS and social care. Under this entity falls the NHS AI Lab, which acts as a platform for collaboration to address the barriers to developing and deploying AI systems in healthcare. The Lab aims to balance the AI revolution in healthcare with guidance and regulations to protect patients. The NHS AI Lab has various programs that drive its work around AI. For example, the AI Regulation program, led by the AI and Digital Regulation Services, explores ways of improving the regulatory approval process and building trust in robust standards. However, neither the programs nor the website addresses the need to balance AI with other healthcare solutions, to phase out AI in favour of human-centred care, to research alternative solutions to healthcare problems, or to take the time to develop an ethical approach to AI. These programs and their work align with and feed into the national strategy for AI, where the aim is for the UK to race to the top and be at the cutting edge of AI development.

Esperanza: Yes, absolutely – agree with this. I think this is where the involvement of the private sector sits uncomfortably in the equation. It is important to remember that motivations, values, and definitions surrounding the public—especially in healthcare—differ significantly from those in the private sector. The focus in healthcare is more human rights-oriented, moving away from capitalistic agendas.

I remember, when reading the NHSX Artificial Intelligence: How to get it right report, two leading areas of concern standing out which seem to inform its guidance around ‘getting AI right’ in healthcare. The first relates to wider issues around general health data governance, which includes considerations such as consent, privacy, surveillance, and patient safety. The second relates to more specific issues around AI and algorithm-driven healthcare practices and processes (‘Specific Algorithmic Considerations’), which include unreliable evidence (leading to misdiagnosis, for example), unfair outcomes (e.g. profiling and discrimination), and loss of oversight (e.g. failures around safety and effectiveness). I think the difficulty lies in putting these two concerns into dialogue with one another, something that the report itself highlights later on: the lack of a unified single body that can oversee how processes are regulated, not to mention the lack of an all-encompassing set of regulations, can become a challenge. How are we to ensure, for example, that the very regulatory compliance measures put in place to make AI safe – such as ‘post-market surveillance’ – do not themselves become the means of creating unfair outcomes, including the ‘profiling and discrimination’ the report highlights as potential areas of concern?

Yureshya: That is indeed true. The more I think about this, the more you end up in a loop. The NHS AI Lab is also home to the AI Ethics Initiative, which focuses on countering potential health inequalities that could arise from how AI is developed and deployed (NHS England). These efforts are very valuable and necessary; however, the notion they advocate is that the inequalities resulting from AI-powered healthcare solutions can be prevented. This brings me to the grim question of whether this is possible in a digitally divided world built on racism, colonialism, and oppression.

Esperanza: …and this goes back to the argument about how we can never really remove the human from AI, despite the surrounding discourses that state otherwise. Human traits and histories do get imprinted in the DNA of AI.

Yureshya: On a hopeful note, when developing and deploying AI in healthcare, what ethical principles should guide decision-making, and have they been incorporated by the NHS?

Esperanza: The NHSX report draws on various ethical frameworks and guiding principles, such as those from the Nuffield Council on Bioethics and the Code of Conduct for Data-Driven Health and Care Technology. This multi-directional approach is essential, and includes ethical guidance around datasets, reliability, and safety, as well as ethical guidance around broader moral and social issues (for example, “respect for persons, respect for human rights, participation, accounting for decisions,” according to the Nuffield Council on Bioethics guidelines). But as mentioned earlier, the challenge is in ensuring that these two areas are in harmony with each other, which I suspect will not always be easy. For example, what is considered a clinically safe process in terms of datasets may not necessarily account for human rights-based issues.

Furthermore, I think the difficulty arises in how these principles work in practice, as there is an underlying assumption that ‘the public’ or ‘patients’ are one homogenous group. For example, Principle 7 from the Code refers to ‘the ethical examination of how the performance will be validated,’ whilst Principle 8 refers to how it is necessary to ‘generate evidence of effectiveness for the intended use and value for money.’ My questions would be: ethical for whom? Effective for whom? How are we to ensure that the measurement of these is truly inclusive?

Yureshya: I agree. I also believe that principles should commit to the fair use of AI for all. By this, I also mean acknowledging the existing racial biases that have shaped decades of data, which are embedded in various social, economic, and cultural contexts, as well as within developers and big tech companies (who are predominantly white). Entities cannot take a colourblind approach to AI initiatives. Instead, they must be consciously anti-racist, inclusive, and ensure that no marginalised communities or people of colour are left behind.

I believe the NHS AI Lab is making efforts to ensure AI-powered healthcare services are both beneficial and accessible to everyone. What additional steps do you think could be taken to make the distribution of these benefits more equitable?

Sharifah: For the public to trust AI technology in the NHS, and to ensure their protection, there must be robust and transparent regulation around an AI strategy. Currently, the NHS’s AI strategy feels somewhat passive. Given the rapid development of AI, it is crucial that the NHS adopts a more proactive regulatory approach by regularly updating guidance on AI usage within the organization. To keep pace with these changes, various legislative approaches could be implemented, such as principles-based regulation, standing committees or commissions to resolve complex cases, and ongoing guidance. Although these are mentioned in the report, it is essential that entities not only acknowledge these strategies but also actively and continuously implement them.

Esperanza: Another necessary effort is bridging the AI literacy gap. The NHSX Artificial Intelligence: How to get it right report highlights the challenge of meaningful communication with patient groups about AI use in healthcare. One suggestion is to collaborate with third-party patient groups and advocates to ensure transparent communication about algorithmic decision-making. While this could be effective, the rapid development of AI will likely widen the AI literacy gap between different patient groups. Those with more resources will be able to continuously train staff and educate their service users, while groups with fewer resources may struggle to keep up. This growing disparity could lead to skewed data, as under-resourced populations may underuse AI-driven services. As a result, AI-facilitated inequalities could worsen, increasing distrust due to the lack of accessible and transparent information. Therefore, to improve AI literacy within communities, the NHS could look into partnering with local community organisations or raising awareness through traditional media channels.

Sharifah: I agree, and including marginalized individuals in decision-making processes would be highly beneficial. Better representation of structurally disadvantaged groups in AI governance and policymaking within the NHS would lead to programs with a more widespread positive impact. To ensure that structurally marginalized groups are not harmed, it is crucial to involve individuals with lived experiences of discrimination and academics who research these groups in decision-making processes. This broader representation will enable the NHS to better identify and address potential biases.

Esperanza: Furthermore, the NHS (and, more widely, the UK Government’s general AI Strategy) also places an emphasis on its ‘Commercial Strategy’ (NHSX, 2019: 36), with an active and conscious move to work with commercialised systems and partners. The tensions surrounding the flow of data between the private and public sectors are nothing new, and questions around, for example, privacy and data access have been ongoing in the healthcare sector since before recent AI initiatives (e.g. the NHS’s National Data Opt-Out). Whilst working with commercial partners has been and can be crucial in advancing healthcare, from the public’s perspective and for engendering trust, I suspect the public will want to know why private companies would be involved in, for example, accessing their health data – this is magnified by the fact that we are dealing with AI tools, something currently known to induce feelings of distrust and even anxiety. This is no longer just about public trust towards health services; it is also about public trust towards AI.

Yureshya: I agree… Transparency around private company partnerships is important. There is a critical need for the NHS to apply a due diligence framework to its technology contractors, as the values of these private entities often do not align with the values of public health. Additionally, contracting with such private entities can prevent marginalised communities from accessing digital health or AI systems. For example, Palantir’s involvement with the NHS has been widely criticised due to its role in human rights violations across the globe. Despite resistance from healthcare workers and human rights organisations, the NHS continued with the agreement. Therefore, to create better public trust and include marginalised communities in the healthcare system, it is essential to establish a robust due diligence framework within the NHS for private and external contractors.

***

We thank you for joining us in this important discussion. If you’d like to explore this topic further, we recommend the following readings:

Part of the SLSA Blog Series, Exploring the Intersections of Technology, Health, and Law, guest edited by Prof. Sharifah Sekalala and Yureshya Perera. Written as part of the project There is No App for This! Regulating the Migration of Health Data in Africa, funded by the Wellcome Trust (grant number: 224856/Z/21/Z).