Published in

IOS Press, Studies in Health Technology and Informatics, 2022

DOI: 10.3233/shti210847

Hazards for the Implementation and Use of Artificial Intelligence Enabled Digital Health Interventions, a UK Perspective

Book chapter published in 2022 by Stuart Harrison, George Despotou, and Theodoros N. Arvanitis
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Background: Artificial Intelligence (AI) is increasingly applied within digital healthcare interventions (DHIs). The use of DHIs entails challenges for their safety assurance, and these are exacerbated by regulatory requirements in the UK, which place the onus of safety assurance not only on the manufacturer but also on the operator of a DHI. Making clinical safety claims, and evidencing the safe implementation and use of AI-based DHIs, requires expertise to understand risk and to act to control or mitigate it. Current health software standards, regulation, and guidance do not provide the insight necessary for safer implementation. Objective: To interpret published guidance and policy related to AI and to support justification of the clinical safety assurance of DHIs. Method: Assessment of UK health regulation policy, standards, and insights from AI institutions, using a published Hazard Assessment framework to structure safety justifications and articulate hazards relating to AI-based DHIs. Results: Identification of hazards for AI-enabled DHIs relating to their implementation and use within healthcare delivery organizations. Conclusion: Applying the method to UK research on AI DHIs highlighted issues that may affect safety and that need consideration when justifying the safety of a DHI.