Published in: Bioethics (Wiley), 2023

DOI: 10.1111/bioe.13222

Artificial intelligence in clinical decision‐making: Rethinking personal moral responsibility

Journal article published in 2023 by Helen Smith, Giles Birchley, and Jonathan Ives.

Abstract

Artificially intelligent systems (AISs) are being created by software development companies (SDCs) to influence clinical decision-making. Historically, clinicians have led healthcare decision-making, and the introduction of AISs makes SDCs novel actors in the clinical decision-making space. Although these AISs are intended to influence a clinician's decision-making, SDCs have been clear that clinicians are in fact the final decision-makers in clinical care and that AISs can only inform their decisions. As such, the default position is that clinicians should hold responsibility for the outcomes of the use of AISs. Yet this default is hard to justify when an AIS has influenced a clinician's judgement and their subsequent decision. In this paper, we argue that this is an imbalanced and unjust position and that careful thought needs to go into how personal moral responsibility for the use of AISs in clinical decision-making should be attributed. The paper employs and examines the difference between prospective and retrospective responsibility, and considers foreseeability as key to determining how personal moral responsibility can be justly attributed. This leads us to the view that moral responsibility for the outcomes of using AISs in healthcare ought to be shared by clinical users and SDCs.