Learning Deep Object Detectors From 3D Models

Xingchao Peng, Baochen Sun, Karim Ali, Kate Saenko; The IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1278-1286


Crowdsourced 3D CAD models are easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low-level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD images to probe the ability of DCNNs to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the benchmark PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.
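The data-synthesis idea in the abstract can be sketched in a few lines: composite a rendered CAD object onto varied backgrounds, optionally replacing its surface with a flat color to mimic texture-less models. The sketch below is illustrative only, not the authors' pipeline; the function names, the nested-list image representation, and the flat-gray "texture" stand-in are all assumptions for the example (a real system would use actual renders and an image library such as Pillow or OpenCV).

```python
# Illustrative sketch (not the authors' code): synthesizing detector
# training images by pasting a masked CAD render onto backgrounds.
# Images here are tiny nested lists of RGB tuples for self-containment.
import random

def composite(render, mask, background, texture=None):
    """Paste the masked object from `render` onto `background`.

    If `texture` (a flat RGB color in this toy version) is given, it
    replaces the object's own appearance, mimicking the texture-less
    CAD models discussed in the abstract.
    """
    h, w = len(render), len(render[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if mask[y][x]:
                row.append(texture if texture else render[y][x])
            else:
                row.append(background[y][x])
        out.append(row)
    return out

def synthesize(render, mask, backgrounds, n, seed=0):
    """Generate n synthetic (image, bounding_box) training pairs with
    random backgrounds and random flat gray 'textures'."""
    rng = random.Random(seed)
    h, w = len(mask), len(mask[0])
    xs = [x for y in range(h) for x in range(w) if mask[y][x]]
    ys = [y for y in range(h) for x in range(w) if mask[y][x]]
    box = (min(xs), min(ys), max(xs), max(ys))  # tight object box
    images = []
    for _ in range(n):
        bg = rng.choice(backgrounds)
        gray = rng.randrange(256)
        images.append((composite(render, mask, bg, (gray, gray, gray)), box))
    return images
```

Because the object mask is known exactly from the render, the bounding-box label comes for free, which is what makes this kind of synthetic data cheap to produce at scale.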

Related Material

@InProceedings{Peng_2015_ICCV,
author = {Peng, Xingchao and Sun, Baochen and Ali, Karim and Saenko, Kate},
title = {Learning Deep Object Detectors From 3D Models},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}