In response to the growing inspection demand driven by process automation in component manufacturing, non-destructive testing (NDT) continues to explore automated approaches that use deep-learning algorithms for defect identification, including in digital X-ray radiography images. This necessitates a thorough understanding of the effect of image-quality parameters on the performance of these deep-learning models. This study investigated the influence of two image-quality parameters, namely signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), on the performance of a U-net deep-learning semantic segmentation model. Input images were acquired with varying combinations of exposure factors, such as kilovoltage, milliamperage, and exposure time, which altered the resultant radiographic image quality. The images were sorted into five datasets according to their measured SNR and CNR values, and the deep-learning model was trained five separate times, once on each dataset. Training on the high-CNR dataset yielded an intersection-over-union (IoU) of 0.9594 on test data of the same category, but the IoU dropped to 0.5875 when the model was tested on lower-CNR data. These results emphasize the importance of balancing the training dataset with respect to the investigated quality parameters in order to enhance the performance of deep-learning segmentation models in NDT digital X-ray radiography applications.
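As a point of reference for the metrics quoted above, the sketch below shows one common way to compute IoU between a predicted and a ground-truth segmentation mask, and textbook-style SNR and CNR estimates from image regions of interest. The region-of-interest choices and exact formulas are illustrative assumptions, not the measurement procedure used in the study.

```python
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, true).sum() / union

def snr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Simple SNR estimate: mean signal over background standard deviation
    (illustrative definition; standards-based procedures may differ)."""
    return signal_roi.mean() / background_roi.std()

def cnr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Simple CNR estimate: ROI contrast over background noise."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()
```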