Label Propagation from ImageNet to 3D Point Clouds

Yan Wang, Rongrong Ji, Shih-Fu Chang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3135-3142

Abstract


Recent years have witnessed growing interest in understanding the semantics of point clouds across a wide variety of applications. However, point cloud labeling remains an open problem, due to the difficulty of acquiring sufficient 3D point labels for training effective classifiers. In this paper, we overcome this challenge by leveraging the existing massive 2D semantically labeled datasets built through decade-long community efforts, such as ImageNet and LabelMe, together with a novel "cross-domain" label propagation approach. Our proposed method consists of two major novel components: Exemplar-SVM-based label propagation, which effectively addresses the cross-domain issue, and a graphical-model-based contextual refinement that incorporates 3D constraints. Most importantly, the entire process requires no training data from the target scenes and scales well to large-scale applications. We evaluate our approach on the well-known Cornell Point Cloud Dataset, achieving much greater efficiency and comparable accuracy even without any 3D training data. Our approach shows further major gains in accuracy when training data from the target scenes is used, outperforming state-of-the-art approaches with far better efficiency.
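To make the first component concrete, below is a minimal sketch of the Exemplar-SVM idea the abstract refers to: train one linear SVM per labeled 2D exemplar against a pool of negatives, then propagate to a 3D point the label of the best-scoring exemplar classifier. The solver (hinge-loss subgradient descent), the function names, the regularization weights, and the toy 2-D features are all illustrative assumptions, not the paper's actual implementation or feature pipeline.

```python
import numpy as np

def train_exemplar_svm(exemplar, negatives, c_pos=0.5, c_neg=0.01,
                       lr=0.1, epochs=200):
    # One linear SVM per positive exemplar against a negative pool,
    # optimized by hinge-loss subgradient descent. Illustrative stand-in;
    # the paper's actual solver and features are not specified here.
    X = np.vstack([exemplar, negatives])
    y = np.concatenate([[1.0], -np.ones(len(negatives))])
    c = np.concatenate([[c_pos], np.full(len(negatives), c_neg)])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1.0                 # margin violations
        coef = c[viol] * y[viol]
        # leading `w` term is the gradient of the L2 regularizer
        w -= lr * (w - (coef[:, None] * X[viol]).sum(axis=0))
        b -= lr * (-coef.sum())
    return w, b

def propagate_label(feature, exemplar_svms):
    # Score a (projected) 3D point feature against every exemplar SVM
    # and propagate the label of the best-scoring exemplar.
    best = max(exemplar_svms, key=lambda s: feature @ s[1] + s[2])
    return best[0]
```

A toy usage: two exemplars ("chair" near [2, 2], "table" near [-2, -2]), each trained against negatives drawn near the other, then a query feature is labeled by the highest-scoring exemplar. In the paper this per-exemplar scoring is followed by the graphical-model refinement over the point cloud, which is not sketched here.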

Related Material


[bibtex]
@InProceedings{Wang_2013_CVPR,
author = {Wang, Yan and Ji, Rongrong and Chang, Shih-Fu},
title = {Label Propagation from ImageNet to 3D Point Clouds},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}