Published in

IEEE Transactions on Medical Imaging, 36(2), pp. 674-683, 2017

DOI: 10.1109/tmi.2016.2621185

DeepCut: Object Segmentation from Bounding Box Annotations using Convolutional Neural Networks

This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare them to a naïve approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.
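The iterative scheme summarised in the abstract can be sketched in a few lines. The toy code below is an illustration only, not the authors' method: a trivial intensity classifier stands in for the CNN, and a hard bounding-box constraint stands in for the dense-CRF regularisation step; the function name `deepcut_sketch` and all parameters are hypothetical.

```python
import numpy as np

def deepcut_sketch(image, bbox, n_iters=5):
    """Hedged sketch of a DeepCut-style iterative loop:
    1. initialise training targets from the bounding box,
    2. fit a pixel classifier on the current targets
       (a simple intensity model here, a CNN in the paper),
    3. regularise the predictions (a dense CRF in the paper;
       here just the hard box constraint),
    4. use the regularised labelling as the next round's targets.
    bbox = (y0, y1, x0, x1), image is a 2-D float array.
    """
    h, w = image.shape
    y0, y1, x0, x1 = bbox
    # Step 1: everything inside the box starts as foreground.
    targets = np.zeros((h, w), dtype=bool)
    targets[y0:y1, x0:x1] = True
    for _ in range(n_iters):
        # Step 2: "train" a trivial classifier on the current targets.
        fg_mean = image[targets].mean()
        bg_mean = image[~targets].mean()
        # Step 3: label each pixel by the closer class mean ...
        pred = np.abs(image - fg_mean) < np.abs(image - bg_mean)
        # ... and enforce that pixels outside the box stay background.
        pred[:y0, :] = False
        pred[y1:, :] = False
        pred[:, :x0] = False
        pred[:, x1:] = False
        # Step 4: the regularised labelling becomes the next targets.
        targets = pred
    return targets
```

On a synthetic image with a bright object inside a loose bounding box, the loop shrinks the initial box labelling onto the object, which is the intended behaviour of the target-update iterations.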