Published in

Springer, Signal, Image and Video Processing, 18(4), pp. 3353-3360, 2024

DOI: 10.1007/s11760-024-02998-5

Comparison of simple augmentation transformations for a convolutional neural network classifying medical images

Journal article published in 2024 by Oona Rainio and Riku Klén
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving restricted
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Simple image augmentation techniques, such as reflection, rotation, or translation, might work differently for medical images than for regular photographs because of the fundamental properties of medical imaging techniques and the bilateral symmetry of the human body. Here, we compare the predictions of a convolutional neural network (CNN) trained for binary classification using either no augmentation or one of seven common types of augmentation. We use 11 different medical data sets, mostly related to lung infections or cancer, containing X-rays, ultrasound (US) images, and images from positron emission tomography (PET) and magnetic resonance imaging (MRI). According to our results, the augmentation types do not produce statistically significant differences for the US and PET data sets, but, for the X-ray and MRI data sets, the best augmentation technique is adding Gaussian blur to the images.
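
The abstract reports results only, and the authors' implementation is not reproduced here. As a rough illustration of how the compared augmentation types could be set up, the sketch below assumes PyTorch's torchvision (not stated in the abstract) and builds one transform pipeline per augmentation plus an unaugmented baseline; all parameter values (rotation angle, translation fraction, blur kernel size and sigma) are illustrative placeholders, not values from the study.

```python
# Minimal sketch (not the authors' code): one torchvision pipeline per
# augmentation type named in the abstract, plus an unaugmented baseline.
# All parameter values below are illustrative assumptions.
from torchvision import transforms

augmentations = {
    # Baseline: no augmentation, only conversion to a tensor.
    "none": transforms.Compose([transforms.ToTensor()]),
    # Reflection: random horizontal flip (relevant for bilaterally symmetric anatomy).
    "reflection": transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ToTensor(),
    ]),
    # Rotation: small random rotation around the image centre.
    "rotation": transforms.Compose([
        transforms.RandomRotation(degrees=15),
        transforms.ToTensor(),
    ]),
    # Translation: random shift of up to 10% of the width and height.
    "translation": transforms.Compose([
        transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
        transforms.ToTensor(),
    ]),
    # Gaussian blur: the augmentation the abstract reports as best for X-ray and MRI data.
    "gaussian_blur": transforms.Compose([
        transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
        transforms.ToTensor(),
    ]),
}

# Hypothetical usage: train the same CNN once per augmentation type and compare
# the resulting binary-classification predictions, e.g.
#   train_set = torchvision.datasets.ImageFolder("data/train",
#                                                transform=augmentations["rotation"])
```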