

Medical devices review calls for immediate action on unfair biases to prevent patient harm

A report published today details the findings of the Independent Review of Equity in Medical Devices.

It calls for urgent action on the development, testing and deployment of medical devices, including those enabled by Artificial Intelligence (AI), to prevent patient harm and the widening of health inequalities.

The review recommends that the government should start preparing now for the disruption to healthcare from the next generation of AI-enabled technologies if it is to minimise the risk of patient harm.

The Expert Review Panel, led by Professor Dame Margaret Whitehead, included Professors Frank Kee (Queen’s University Belfast), Raghib Ali (Cambridge University), Enitan Carrol (The University of Liverpool and North West Clinical Research Network), and Chris Holmes (The Alan Turing Institute and Oxford University).

Set up in 2022 by the then Secretary of State for Health and Social Care, the review sought to establish the extent and impact of ethnic and other unfair biases in the performance of medical devices commonly used in the NHS. It was commissioned amid concerns that such biases may lead to suboptimal treatment for the affected groups in the population.

The review focused on three types of medical devices where evidence suggested that the potential for harm was substantial. These were optical devices such as pulse oximeters, AI-enabled devices and certain genomics applications, such as polygenic risk scores.

The expert panel found evidence that pulse oximeters – widely used during the Covid-19 pandemic to monitor blood oxygen levels – were less accurate for patients with darker skin, tending to overestimate oxygen saturation. This could delay treatment if dangerously low oxygen levels in patients with darker skin tones were missed.
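To illustrate the mechanism, here is a minimal sketch, with entirely hypothetical bias figures (not measurements from the review), of how a device that overestimates oxygen saturation for darker skin can keep a reading above a fixed alarm threshold even when the true level is dangerously low:

```python
# Illustrative only: the bias values below are hypothetical,
# not figures reported by the review.

ALARM_THRESHOLD = 92.0  # % SpO2 below which clinicians would intervene

def displayed_spo2(true_saturation: float, device_bias: float) -> float:
    """Reading shown by the oximeter: true saturation plus any device bias."""
    return true_saturation + device_bias

true_sao2 = 89.0  # a dangerously low true oxygen saturation

# Hypothetical biases: near zero for lighter skin, overestimation for darker skin.
for skin_tone, bias in [("lighter skin", 0.5), ("darker skin", 3.5)]:
    reading = displayed_spo2(true_sao2, bias)
    status = "alarm raised" if reading < ALARM_THRESHOLD else "hypoxaemia missed"
    print(f"{skin_tone}: oximeter reads {reading:.1f}% -> {status}")
```

With these illustrative numbers, the same true saturation of 89% triggers an alarm for one patient but is read as a reassuring 92.5% for the other.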

The review recommends mitigating actions in relation to pulse oximeters already in widespread use across the NHS and, since Covid-19, in homes all around the country. Further recommendations aim to prevent adverse impacts arising in new devices as they are developed.

One example of patient harm identified was the potential under-diagnosis of skin cancers in people with darker skin when using AI-enabled devices, because the underlying models had been trained predominantly on images of lighter skin.
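This kind of imbalance can be surfaced with a simple audit of the training data. The sketch below is hypothetical, assuming each training image carries a skin-tone label (here grouped by the Fitzpatrick scale); the dataset and field names are invented for illustration:

```python
from collections import Counter

def skin_tone_distribution(records: list[dict]) -> dict[str, float]:
    """Share of training images per skin-tone group."""
    counts = Counter(r["fitzpatrick_type"] for r in records)
    total = sum(counts.values())
    return {tone: n / total for tone, n in sorted(counts.items())}

# Toy dataset skewed towards lighter skin, mirroring the imbalance described above.
training_set = (
    [{"fitzpatrick_type": "I-II"}] * 700    # lighter skin
    + [{"fitzpatrick_type": "III-IV"}] * 250
    + [{"fitzpatrick_type": "V-VI"}] * 50   # darker skin
)

for tone, share in skin_tone_distribution(training_set).items():
    print(f"Fitzpatrick {tone}: {share:.0%} of training images")
```

A model trained on such a set sees darker skin in only a small fraction of examples, which is one route by which the biases described in the report arise.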

The report also outlines how women and those in disadvantaged socio-economic conditions (which can affect power, exposure to health hazards, employment, and access to healthcare) are disproportionately affected by these inequities. For example, these groups are often underrepresented in recruitment to the clinical trials and evaluations used to test new medical devices.

Commenting on the review findings, Professor Frank Kee from the Centre for Public Health at Queen’s University Belfast, said: “We have a duty to prepare the next generation of health care professionals to be alert to the implications for inequality of new devices and technologies and to take actions across the lifecycle of device development, testing and deployment to ensure fair outcomes for all groups in society.”

The University of Liverpool’s Professor Dame Margaret Whitehead, Chair of the Review, said: “The advance of AI in medical devices could bring great benefits, but it could also bring harm through inherent bias against certain groups in the population, notably women, people from ethnic minorities and disadvantaged socio-economic groups.

"Our review reveals how existing biases and injustices in society can unwittingly be incorporated at every stage of the lifecycle of AI-enabled medical devices, and then magnified in algorithm development and machine learning.

“Our recommendations therefore call for system-wide action by many stakeholders and now need to be implemented as a matter of priority with full government support.”

Media

Media inquiries to Sian Devlin at s.devlin@qub.ac.uk 
