Published in

MDPI, Journal of Clinical Medicine, 10(14), p. 3101, 2021

DOI: 10.3390/jcm10143101

Attitudes towards Trusting Artificial Intelligence Insights and Factors to Prevent the Passive Adherence of GPs: A Pilot Study

This paper is made freely available by the publisher.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

Artificial Intelligence (AI) systems could improve system efficiency by supporting clinicians in making appropriate referrals. However, they are imperfect by nature, and misdiagnoses, if not correctly identified, can have consequences for patient care. In this paper, findings from an online survey are presented to understand the aptitude of GPs (n = 50) to appropriately trust or distrust the output of a fictitious AI-based decision support tool when assessing skin lesions, and to identify which individual characteristics could make GPs less prone to adhere to erroneous diagnostic results. The findings suggest that, when the AI was correct, the GPs’ ability to correctly diagnose a skin lesion significantly improved after receiving correct AI information, from 73.6% to 86.8% (X2 (1, N = 50) = 21.787, p < 0.001), with significant effects for both the benign (X2 (1, N = 50) = 21, p < 0.001) and malignant cases (X2 (1, N = 50) = 4.654, p = 0.031). However, when the AI provided erroneous information, only 10% of the GPs were able to correctly disagree with the AI's diagnostic indication (d-AIW M: 0.12, SD: 0.37), and only 14% of participants were able to correctly decide the management plan despite the AI insights (d-AIW M: 0.12, SD: 0.32). The analysis of the differences between groups in terms of individual characteristics suggested that GPs with domain knowledge in dermatology were better at rejecting wrong insights from the AI.
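The abstract reports several Pearson chi-square statistics of the form X2 (1, N = 50). As a minimal sketch of how such a statistic is computed, the snippet below runs a chi-square test on a hypothetical 2×2 contingency table; the counts are illustrative only and are not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic (1 df) for a 2x2 contingency table.

    table: [[a, b], [c, d]] of observed counts, e.g. rows = condition
    (before/after AI advice), columns = correct/incorrect diagnosis.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under independence: row total * column total / N
            exp = row_totals[i] * col_totals[j] / n
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical counts, not the study's data:
observed = [[37, 13], [43, 7]]
print(round(chi_square_2x2(observed), 3))  # → 2.25
```

The resulting statistic is compared against the chi-square distribution with 1 degree of freedom to obtain the p-value; note that for paired before/after judgments by the same participants, a McNemar test on discordant pairs would be the usual alternative.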