Esperanza Miyake, Yureshya Perera and Sharifah Sekalala

Artificial intelligence (AI), an area of growing interest often touted as the solution to many problems, has made significant inroads into national healthcare systems over the past decade. AI has the potential to improve healthcare delivery and patient outcomes. However, there are growing concerns that relying on AI to completely transform healthcare delivery, in pursuit of better access and more efficient services, carries very real risks. Recent research and evidence have shown that AI could inadvertently exacerbate existing health inequalities if it is not developed and deployed responsibly, in appropriate areas and under proper regulation.

This two-part roundtable brings together three researchers to discuss the role of AI in healthcare and the United Kingdom’s National Health Service (NHS), with a specific focus on its impact on marginalised communities and the regulations that could be imposed to safeguard the rights of these communities.

Dr. Esperanza Miyake, Chancellor’s Fellow and Senior Lecturer at the University of Strathclyde

Prof. Sharifah Sekalala, Professor of Global Health Law at the University of Warwick and Director of the Warwick Global Health Centre

Yureshya Perera, Researcher at the University of Warwick and Activist at the Amnesty UK Anti-Racism Network

In the first blog, we explore the use of AI in healthcare and its risks for marginalised communities, examining why AI alone is not the solution to healthcare challenges. The second blog examines the current approach to AI delivery and regulation in the UK healthcare system and discusses how AI could be implemented and regulated. Join the three of us as we navigate the complex and ambiguous landscape of AI in healthcare, discussing both its promises and pitfalls, and reimagine a more equitable approach to AI in healthcare.

***

Yureshya: Thank you for joining this discussion. Let us get started with two questions… Compared to existing healthcare technologies, what makes the challenges around AI in healthcare different and what is its role in exacerbating social inequalities?

Esperanza: The use of AI in healthcare is very different to the traditional technologies we have been using to run healthcare systems. However, there are similarities in the challenges: for example, just as digital literacy was an issue with traditional technologies, AI literacy is an issue now. I think AI literacy will be an emerging and ongoing issue for a while: certain patient groups will already be more AI literate than others – or have the resources to ‘catch up’ – and will thus understand the consequences of, and be able to make informed decisions about, for example, having their health data collected for AI and machine-learning training models. As AI technologies advance further, this inevitable gap in AI literacy between different demographics will only widen, which will ultimately have ramifications across all areas of health: from issues around trust in the health system, right through to who receives health provisions (and who does not), based on who is AI-(in)visible.

Yureshya: Agreed. However, the difference from traditional technology is that AI is only as good as the data it is fed and, let me add, the environment in which it is trained and developed. The selling point of AI, and its often-celebrated promise, is that it will create better health outcomes for populations through data-driven decisions free of human limitations and errors. However, the gap in AI literacy that Esperanza describes means that some communities either choose not to share their data because of mistrust, or are left out altogether, which ultimately produces biased datasets and biased outcomes. Beyond the AI literacy gap, centuries of inequality and bias have also tainted the very datasets we use. All AI algorithms need large datasets to learn from, but history has shown that whole sections of the population, such as people of colour, women, and LGBTIQ+ communities, have often been absent, omitted, or misrepresented in existing biomedical datasets, leading to data bias. When an algorithm is trained on biased and incomplete data, it is very likely to reinforce patterns from the dominant category in its training data, perpetuating that bias. This can result in dangerous outcomes such as misdiagnoses, poor generalisation, and unequal resource distribution.

Esperanza: I agree. AI-driven bias in healthcare stems from algorithmic decision-making processes that rely on data gathered from majority and/or misrepresentative sources, inevitably creating serious risks for marginalised communities. Clear examples include the study which found that, because Black patients were spending less on healthcare provisions (due to socio-economic inequalities), an algorithm falsely concluded that they needed less healthcare, and the research showing that medical equipment designed in ways that capture white patients more accurately produces algorithmically driven decisions built on biases present at the very point of clinical measurement. Both are examples of AI exacerbating inequalities in ways that lead to higher risks and mortality rates in marginalised communities, because patients are either misdiagnosed or receive fewer healthcare provisions. Marginalised groups thus become doubly misrepresented, invisibilised, and subject to higher risk in health and beyond.

Sharifah: I really agree… AI systems trained on biased data can indeed produce unfair or discriminatory results, potentially amplifying existing societal biases and inequalities. However, the use of AI is not all doom and gloom. As social science researchers, it is important for us to acknowledge that AI also presents several benefits: it has the potential to revolutionise screening, may perform certain functions, such as reading X-rays in radiology, more accurately, and genomic databases could in future lead to personalised medicine for patients with rare diseases. Nevertheless, challenges remain. One crucial position we need to advocate for is the rigorous evaluation of AI makers’ claims, as Yureshya mentioned, regarding AI’s impact on health efficiency and outcomes. A comprehensive, evidence-based analysis must be conducted to verify these claims and to critically assess whether any existing evidence backs the claim that AI is the silver bullet.

Esperanza: I think we really need to move away from the idea that artificial intelligence will be the solution for broken healthcare systems. For me, the recent public discourse around AI closely resembles the discourse of roughly 10-15 years ago around the ‘digital transformation’ of the public sector (e.g. ‘Digital by Default’ in the UK), which included the digitalisation of health. From m-health to e-health, the arguments then were almost identical to those being made now around AI: the enthusiastic claims were that technologically mediated health practices and systems would save time, money, medical resources, and the environment, and in turn supposedly transform healthcare and society. Whilst the digitalisation of health did indeed increase access and improve healthcare for many, it also exacerbated existing health inequalities, and at times was even weaponised to create new forms of necropolitical governance over those already marginalised. The COVID-19 pandemic was a good example of this process, where BAME communities were disproportionately affected due to the mismanagement of data collection and infrastructural disparities. In this context, what I see with AI now is a more sophisticated and complex version of the same narrative and outcome. AI will indeed advance healthcare practices and systems in the areas of diagnosis, administration and patient care – cutting costs, time and labour – but it will ultimately benefit the majority, because that is the nature of AI: a system designed to learn, analyse, predict and generate from the majority will only ever truly serve the majority, never the minority.

Yureshya: Yes, we should be cautious of AI solutionism—the belief that AI can single-handedly solve all healthcare challenges. Relying solely on AI in healthcare can also lead to the dehumanisation of patient care. AI-driven systems may push doctors to prioritise algorithms and automated decisions over the lived experiences and values of their patients. This shift in focus can reinforce paternalism in healthcare, creating a dangerous scenario where the most vulnerable patients, often from marginalised communities, are further disempowered. In recent years, we have seen cases where AI overrides doctors’ own medical judgments. When layered onto existing power imbalances between clinicians and patients, AI introduces yet another level of paternalism. By treating AI as an authority that “knows best,” we risk silencing patients’ voices, especially those from marginalised backgrounds, who may have different preferences for their care. Furthermore, AI decisions are fallible—they can be based on flawed, incomplete, or biased datasets that clinicians may overlook. In such cases, AI could drive decisions that are not only impersonal but also harmful.

Sharifah: I believe that the dehumanisation of patient care could be mitigated if health resources are distributed and allocated efficiently and based on evidence. Despite the current fixation on AI, healthcare administrations must ensure that resources continue to be allocated to everyday healthcare, particularly for interventions that are proven effective and supported by sufficient evidence. For instance, research repeatedly demonstrates the importance of care and human connection in medical interventions. If used correctly, AI could ‘free up’ doctors’ valuable time by taking over tasks that require less nuance, allowing doctors to focus more on decisions that involve the ‘human’ aspects of care, such as lived experience. This consideration is crucial because there is a risk that the potential of future AI tools might lead to a reallocation of resources away from relatively inexpensive, well-established, evidence-based healthcare tools.

Esperanza: Yes, I believe this is an aspect that needs to be openly discussed with full transparency. Where is AI most effectively implemented, and how is it currently applied in diagnosis, administration, or clinical development? Additionally, how is it regulated to ensure that its implementation and use remain ethical? 

***

Esperanza’s questions are important and timely, addressing regulation and implementation, topics that will be covered in part 2 of this blog set, titled “AI in Healthcare: Inclusive Strategies for Equitable Implementation in the UK”.

If you’d like to explore this topic further, the authors recommend the following readings:

——

Part of the SLSA Blog Series, Exploring the Intersections of Technology, Health, and Law, guest edited by Prof. Sharifah Sekalala and Yureshya Perera. Written as part of the project There is No App for This! Regulating the Migration of Health Data in Africa, funded by the Wellcome Trust (grant number: 224856/Z/21/Z).