Learning Structured Hough Voting for Joint Object Detection and Occlusion Reasoning

Tao Wang, Xuming He, Nick Barnes; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 1790-1797

Abstract

We propose a structured Hough voting method for detecting objects with heavy occlusion in indoor environments. First, we extend the Hough hypothesis space to include both object location and its visibility pattern, and design a new score function that accumulates votes for object detection and occlusion prediction. In addition, we explore the correlation between objects and their environment, building a depth-encoded object-context model based on RGB-D data. In particular, we design a layered context representation and allow image patches from both objects and backgrounds to vote for the object hypotheses. We demonstrate that, using a data-driven 2.1D representation, we can learn higher-quality visual codebooks and obtain detection results that are more interpretable in terms of the spatial relationship between objects and the viewer. We test our algorithm on two challenging RGB-D datasets with significant occlusion and intraclass variation, and demonstrate the superior performance of our method.
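To make the voting idea concrete, here is a minimal toy sketch of classical Hough voting for object detection (a generic illustration only, not the paper's structured model with visibility patterns or depth-encoded context): each patch, assumed to be matched to a codebook entry, casts a weighted vote for a candidate object centre, and the hypothesis with the highest accumulated score wins. All names and the toy data are hypothetical.

```python
import numpy as np

def hough_vote(patches, grid_shape):
    """Accumulate patch votes into a score map over candidate object centres.

    patches: list of ((x, y), (dx, dy), weight) tuples, where (x, y) is the
    patch location, (dx, dy) the codebook offset to the object centre, and
    weight the match confidence. Returns the score map and the best centre.
    """
    scores = np.zeros(grid_shape)
    for (x, y), (dx, dy), w in patches:
        cx, cy = x + dx, y + dy            # predicted object centre
        if 0 <= cx < grid_shape[0] and 0 <= cy < grid_shape[1]:
            scores[cx, cy] += w            # accumulate the weighted vote
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return scores, tuple(int(i) for i in best)

# Three patches agree on centre (5, 5); one weaker outlier votes elsewhere.
patches = [((3, 4), (2, 1), 1.0),
           ((7, 6), (-2, -1), 1.0),
           ((5, 2), (0, 3), 1.0),
           ((1, 1), (0, 0), 0.5)]
scores, best = hough_vote(patches, (10, 10))
print(best)  # -> (5, 5)
```

The paper's extension replaces this single-location hypothesis space with a joint space over location and visibility pattern, so occlusion is predicted alongside detection rather than treated as noise.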

Related Material

[pdf]
[bibtex]
@InProceedings{Wang_2013_CVPR,
author = {Wang, Tao and He, Xuming and Barnes, Nick},
title = {Learning Structured Hough Voting for Joint Object Detection and Occlusion Reasoning},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}