
Published in

Human Behavior and Emerging Technologies, 1(2024), 2024

DOI: 10.1155/2024/1119816


Assessing the Performance of ChatGPT 3.5 and ChatGPT 4 in Operative Dentistry and Endodontics: An Exploratory Study

This paper was not found in any repository; the policy of its publisher is unknown or unclear.

Full text: Unavailable

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

Background: Artificial intelligence is an innovative technology that mimics human cognitive capacities and has attracted worldwide attention through its vast applications in various fields.

Aim: This study aimed to assess the effects of ChatGPT 3.5 and ChatGPT 4 on the validity, reliability, and authenticity of standard assessment techniques used in undergraduate dentistry education.

Methodology: Twenty questions, each requiring a single best answer, were selected from two domains: 10 from operative dentistry and 10 from endodontics. The questions were divided equally, with half presented with multiple-choice options and the other half without. Two investigators used separate ChatGPT accounts to generate answers, repeating each question three times. Answers were scored between 0% and 100% based on their accuracy; the mean score of the three attempts was recorded, and statistical analysis was conducted.

Results: No statistically significant differences were found between ChatGPT 3.5 and ChatGPT 4 in the accuracy of their responses. The analysis also showed high consistency between the two reviewers, with no significant difference in their assessments.

Conclusion: This study evaluated the performance of ChatGPT 3.5 and ChatGPT 4 in answering questions on endodontics and operative dentistry. The results showed no statistically significant differences between the two versions, indicating comparable response accuracy. The consistency between reviewers further supported the reliability of the assessment process.
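
The abstract does not report the underlying scores or name the statistical test used. The following Python snippet is a minimal illustrative sketch of the scoring scheme it describes (three repeated attempts per question, averaged, then compared between models), using placeholder data; the Wilcoxon signed-rank test is a hypothetical stand-in for the unspecified analysis, not the authors' method.

    # Illustrative sketch only: placeholder scores and a hypothetical
    # choice of test (the abstract does not name the analysis used).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # 20 questions x 3 repeated attempts per model, each attempt scored
    # 0-100 for accuracy (placeholder data, not the study's results).
    gpt35 = rng.integers(0, 101, size=(20, 3))
    gpt4 = rng.integers(0, 101, size=(20, 3))

    # Mean of the three attempts per question, as described in the abstract.
    gpt35_mean = gpt35.mean(axis=1)
    gpt4_mean = gpt4.mean(axis=1)

    # Paired comparison across the same 20 questions.
    stat, p = stats.wilcoxon(gpt35_mean, gpt4_mean)
    print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")

A paired test is used here because both models answer the identical set of questions; with real data, inter-reviewer consistency could likewise be checked with an agreement statistic such as an intraclass correlation.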