DeepCut: Object Segmentation from Bounding Box Annotations Using Convolutional Neural Networks

Rajchl M, Lee MCH, Oktay O, Kamnitsas K, Passerat-Palmbach J, Bai W, Damodaram M, Rutherford MA, Hajnal JV, Kainz B, Rueckert D (2017)


Publication Type: Journal article

Publication year: 2017

Journal: IEEE Transactions on Medical Imaging

Volume: 36

Pages Range: 674-683

Article Number: 7739993

Journal Issue: 2

DOI: 10.1109/TMI.2016.2621185

Abstract

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare them to a naïve approach to CNN training under weak supervision. We test its applicability on brain and lung segmentation problems in a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.
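The iterative scheme the abstract describes (train a classifier on the current pixel labels, re-predict, constrain predictions to the bounding box, and repeat) can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: a nearest-mean intensity classifier stands in for the CNN, and the densely-connected CRF regularisation step is omitted; the function name `deepcut_toy` and all details are illustrative assumptions.

```python
import numpy as np

def deepcut_toy(image, box, iters=5):
    """Toy sketch of a DeepCut-style iterative label update.

    NOTE (assumption): the real method trains a CNN and regularises
    with a densely-connected CRF; here a nearest-mean classifier
    replaces the CNN and no CRF step is applied.
    """
    y0, y1, x0, x1 = box
    # Initialise training targets: inside the box is foreground,
    # everything outside is background.
    labels = np.zeros(image.shape, dtype=bool)
    labels[y0:y1, x0:x1] = True
    for _ in range(iters):
        fg_mean = image[labels].mean()
        bg_mean = image[~labels].mean()
        # "Classifier" prediction: assign each pixel to the class
        # whose mean intensity is nearer.
        pred = np.abs(image - fg_mean) < np.abs(image - bg_mean)
        # Weak-supervision constraint: pixels outside the bounding
        # box can never be foreground.
        new_labels = np.zeros_like(labels)
        new_labels[y0:y1, x0:x1] = pred[y0:y1, x0:x1]
        if np.array_equal(new_labels, labels):
            break  # targets converged
        labels = new_labels
    return labels

# Usage: a bright 6x6 object inside a loose 10x10 bounding box.
img = np.zeros((20, 20))
img[6:12, 6:12] = 1.0
mask = deepcut_toy(img, (4, 14, 4, 14))
```

On this toy example the loop shrinks the initial box-shaped label map down to the bright object in a few iterations, which mirrors how the paper's EM-like updates refine bounding-box annotations towards a pixelwise segmentation.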


How to cite

APA:

Rajchl, M., Lee, M.C.H., Oktay, O., Kamnitsas, K., Passerat-Palmbach, J., Bai, W.,... Rueckert, D. (2017). DeepCut: Object Segmentation from Bounding Box Annotations Using Convolutional Neural Networks. IEEE Transactions on Medical Imaging, 36(2), 674-683. https://doi.org/10.1109/TMI.2016.2621185

MLA:

Rajchl, Martin, et al. "DeepCut: Object Segmentation from Bounding Box Annotations Using Convolutional Neural Networks." IEEE Transactions on Medical Imaging 36.2 (2017): 674-683.
