Bottom-Up Segmentation for Top-Down Detection

Sanja Fidler, Roozbeh Mottaghi, Alan Yuille, Raquel Urtasun; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3294-3301

Abstract


In this paper we are interested in how semantic segmentation can help object detection. Towards this goal, we propose a novel deformable part-based model which exploits region-based segmentation algorithms that compute candidate object regions by bottom-up clustering followed by ranking of those regions. Our approach allows every detection hypothesis to select a segment (including void), and scores each box in the image using both the traditional HOG filters as well as a set of novel segmentation features. Thus our model "blends" between the detector and segmentation models. Since our features can be computed very efficiently given the segments, we maintain the same complexity as the original DPM [14]. We demonstrate the effectiveness of our approach on PASCAL VOC 2010, and show that when employing only a root filter our approach outperforms the Dalal & Triggs detector [12] on all classes, achieving 13% higher average AP. When employing the parts, we outperform the original DPM [14] in 19 out of 20 classes, achieving an improvement of 8% AP. Furthermore, we outperform the previous state-of-the-art on the VOC'10 test set by 4%.
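The abstract's scoring scheme, in which each detection hypothesis combines its HOG filter response with segmentation features from a chosen candidate segment (or "void", meaning no segment), can be illustrated with a minimal sketch. This is not the authors' code; the function name, feature layout, and weights are hypothetical.

```python
# Hedged sketch (not the paper's implementation): a hypothesis scores a box
# with its HOG response plus a linear segmentation-feature term, maximized
# over candidate segments, with "void" (no segment) as a fallback option.

def score_hypothesis(hog_score, segment_features, seg_weights):
    """Return the blended score and the index of the chosen segment.

    hog_score        : scalar HOG filter response for this box
    segment_features : list of feature vectors, one per candidate segment
    seg_weights      : learned weights for the segmentation features
    """
    best_score, best_seg = hog_score, None  # "void": select no segment
    for idx, feats in enumerate(segment_features):
        s = hog_score + sum(w * f for w, f in zip(seg_weights, feats))
        if s > best_score:
            best_score, best_seg = s, idx
    return best_score, best_seg

# Illustrative example: two candidate segments for one box.
score, seg = score_hypothesis(
    hog_score=1.2,
    segment_features=[[0.8, 0.1], [0.2, 0.9]],
    seg_weights=[0.5, -0.3],
)
```

Because the segmentation term is linear in precomputed per-segment features, maximizing over segments adds only a cheap per-box loop, which is consistent with the abstract's claim that the model keeps the complexity of the original DPM.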

Related Material


@InProceedings{Fidler_2013_CVPR,
author = {Fidler, Sanja and Mottaghi, Roozbeh and Yuille, Alan and Urtasun, Raquel},
title = {Bottom-Up Segmentation for Top-Down Detection},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013},
pages = {3294-3301}
}