Published in

Wiley, International Forum of Allergy & Rhinology, 2024

DOI: 10.1002/alr.23323

ChatGPT‐4 performance in rhinology: A clinical case series



Abstract

Key points

Chatbot Generative Pre-trained Transformer (ChatGPT)-4 indicated more than twice as many additional examinations as practitioners in the management of clinical cases in rhinology.

The consistency between ChatGPT-4 and practitioners in the indication of additional examinations may vary significantly from one examination to another.

ChatGPT-4 proposed a plausible and correct primary diagnosis in 62.5% of cases, while pertinent and necessary additional examinations and therapeutic regimens were indicated in 7.5%–30.0% and 7.5%–32.5% of cases, respectively.

The stability of ChatGPT-4 responses is moderate to high.

The performance of ChatGPT-4 was not influenced by the human-reported level of difficulty of the clinical cases.