Published in

Springer-Verlag, Lecture Notes in Computer Science, pp. 210-222

DOI: 10.1007/978-3-319-59050-9_17

Weakly-supervised evidence pinpointing and description

Book chapter published in 2017 by Qiang Zhang, Abhir Bhalerao, and Charles Hutchinson.
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

We propose a learning method to identify which specific regions and features of images contribute to a certain classification. In the medical imaging context, these are the evidence regions where abnormalities are most likely to appear, together with the discriminative features of those regions that support the pathology classification. The learning is weakly supervised, requiring only the pathological labels and no other prior knowledge. The method can also be applied to learn a salient description of an anatomy that discriminates it from its background, so that the anatomy can be localised before a classification step. We formulate evidence pinpointing as a sparse descriptor learning problem. Because of the large computational complexity, the objective function is composed stochastically and optimised with the Regularised Dual Averaging (RDA) algorithm. We demonstrate that the learnt feature descriptors carry more specific and more discriminative information than hand-crafted descriptors, which leads to superior performance on the tasks of anatomy localisation and pathology classification, respectively. We apply our method to the problem of lumbar spinal stenosis, localising and classifying vertebrae in MRI images. Experimental results show that, trained with only target labels, our method achieves better or competitive performance on both tasks compared with strongly supervised methods that require labels and multiple landmarks. A further improvement is achieved by training on additional weakly annotated data, which gives robust localisation with an average error within 2 mm and classification accuracies close to human performance.
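
As a concrete illustration of the optimisation step named above, the sketch below implements a generic l1-Regularised Dual Averaging (RDA) update (Xiao, 2010) on a toy sparse regression problem. It is not the paper's descriptor-learning objective; the helper name l1_rda_step, the parameters lam and gamma, and the toy data are illustrative assumptions.

import numpy as np

def l1_rda_step(g_avg, t, lam, gamma):
    # One l1-RDA update: coordinates whose averaged-gradient magnitude
    # stays below lam are set exactly to zero, which yields sparsity.
    w = np.zeros_like(g_avg)
    active = np.abs(g_avg) > lam
    w[active] = -(np.sqrt(t) / gamma) * (g_avg[active] - lam * np.sign(g_avg[active]))
    return w

# Toy usage: recover a sparse weight vector from noisy linear measurements.
rng = np.random.default_rng(0)
d, lam, gamma = 50, 0.05, 5.0
w_true = np.zeros(d)
w_true[:5] = 1.0                            # sparse ground truth
w, g_avg = np.zeros(d), np.zeros(d)
for t in range(1, 2001):
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    g = (x @ w - y) * x                     # stochastic gradient of the squared loss
    g_avg += (g - g_avg) / t                # running average of gradients
    w = l1_rda_step(g_avg, t, lam, gamma)

The closed-form thresholding in l1_rda_step is what makes RDA attractive for sparse descriptor learning: coordinates with weak average evidence are zeroed exactly rather than merely shrunk.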