Published in

Springer, Lecture Notes in Computer Science, pp. 478-486, 2016

DOI: 10.1007/978-3-319-46723-8_55

Deep learning for multi-task medical image segmentation in multiple modalities

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving forbidden
Postprint: archiving restricted
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Automatic segmentation of medical images is an important task for many clinical applications. In practice, a wide range of anatomical structures are visualised using different imaging modalities. In this paper, we investigate whether a single convolutional neural network (CNN) can be trained to perform different segmentation tasks. A single CNN is trained to segment six tissues in MR brain images, the pectoral muscle in MR breast images, and the coronary arteries in cardiac CTA. The CNN therefore learns to identify the imaging modality, the visualised anatomical structures, and the tissue classes. For each of the three tasks (brain MRI, breast MRI and cardiac CTA), this combined training procedure resulted in a segmentation performance equivalent to that of a CNN trained specifically for that task, demonstrating the high capacity of CNN architectures. Hence, a single system could be used in clinical practice to automatically perform diverse segmentation tasks without task-specific training.
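The joint-training idea in the abstract can be sketched as follows: merge the labels of all three tasks into a single output space (so one softmax layer covers brain tissues, pectoral muscle and coronary arteries), then train one model on mini-batches drawn from all tasks at once. The sketch below is a minimal illustration of that label merging and mixed-batch sampling, assuming placeholder class names, a made-up patch size, and a toy linear classifier standing in for the paper's CNN; none of these details come from the paper itself.

```python
import numpy as np

# Per-task label sets. The six brain tissue names are placeholders; the
# actual tissue labels are defined in the paper, not reproduced here.
TASK_CLASSES = {
    "brain_mr":  ["background"] + [f"brain_tissue_{i}" for i in range(1, 7)],
    "breast_mr": ["background", "pectoral_muscle"],
    "cardiac_cta": ["background", "coronary_artery"],
}

# Merge all tasks into one combined label space; "background" is shared.
combined = []
for classes in TASK_CLASSES.values():
    for c in classes:
        if c not in combined:
            combined.append(c)
N_CLASSES = len(combined)                    # 9 distinct output classes
label_index = {c: i for i, c in enumerate(combined)}

rng = np.random.default_rng(0)
PATCH = 25 * 25                              # flattened 25x25 patch (illustrative size)

# Toy stand-in for the single CNN: one linear layer + softmax.
W = rng.normal(scale=0.01, size=(PATCH, N_CLASSES))
b = np.zeros(N_CLASSES)

def predict(patches):
    """Class probabilities over the combined label space."""
    logits = patches @ W + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mixed_batch(n_per_task=4):
    """Draw patches from every task into one mini-batch, as in joint training."""
    xs, ys = [], []
    for task, classes in TASK_CLASSES.items():
        xs.append(rng.normal(size=(n_per_task, PATCH)))  # synthetic intensities
        ys.extend(label_index[rng.choice(classes)] for _ in range(n_per_task))
    return np.vstack(xs), np.array(ys)

x, y = mixed_batch()
p = predict(x)   # (12, 9): every sample scored against all tasks' classes
```

Because every mini-batch mixes patches from all three tasks, the single set of weights must learn to distinguish modality and anatomy implicitly, which is the property the abstract's experiments evaluate.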