Published in

Journal of Medical Ethics (BMJ Publishing Group), 2023, jme-2022-108831

DOI: 10.1136/jme-2022-108831

Clinicians and AI use: where is the professional guidance?

Journal article published in 2023 by Helen Smith, John Downer, Jonathan Ives
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

With the introduction of artificial intelligence (AI) to healthcare, there is also a need for professional guidance to support its use. New (2022) reports from the National Health Service AI Lab and Health Education England focus on healthcare workers' understanding of, and confidence in, AI clinical decision support systems (AI-CDDSs), and are concerned with developing trust in, and the trustworthiness of, these systems. While they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinical users who will be required to use them in patient care.

This paper argues that clinical, professional and reputational safety will be put at risk if this deficit of professional guidance for clinical users of AI-CDDSs is not redressed. We argue that it is not enough to develop training for clinical users without first establishing professional guidance regarding the rights and expectations of clinical users.

We conclude with a call to action for clinical regulators: to unite in drafting guidance for users of AI-CDDSs that helps manage clinical, professional and reputational risks. We further suggest that this exercise offers an opportunity to address fundamental issues in the use of AI-CDDSs, regarding, for example, the fair burden of responsibility for outcomes.