Learning With Dataset Bias in Latent Subcategory Models

Dimitris Stamos, Samuele Martelli, Moin Nabi, Andrew McDonald, Vittorio Murino, Massimiliano Pontil; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3650-3658

Abstract


Latent subcategory models (LSMs) offer significant improvements over flat classifiers such as linear SVMs. Training LSMs is challenging, however, due to the potentially large number of local optima in the objective function and the increased model complexity, which demands large training sets. Larger training sets are often available only as a collection of heterogeneous datasets, and previous work has highlighted the danger of naively training a model on their concatenation, due to the presence of dataset bias. In this paper, we present a model which jointly learns an LSM for each dataset as well as a compound LSM. The method borrows statistical strength across the datasets while reducing their inherent bias. In experiments, the compound LSM, when tested on Pascal, LabelMe, Caltech101 and SUN in a leave-one-dataset-out fashion, achieves an average improvement of over 6.5% over a previous SVM-based undoing-bias approach and an average improvement of over 8.6% over a standard LSM trained on the concatenation of the datasets. Hence our method provides the best of both worlds.
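To make the idea concrete, here is a minimal sketch (not the authors' code) of how such a model scores an example: each dataset d keeps subcategory weights of the form w[d][k] = v[k] + delta[d][k], a shared compound component v plus a dataset-specific offset delta, in the spirit of SVM-based undoing bias, and the LSM score is the maximum over subcategories. All names and numbers below are illustrative assumptions.

```python
# Hedged sketch of LSM scoring with a compound + dataset-specific decomposition.
# v[k]        : shared ("compound") weight vector for subcategory k
# delta[d][k] : dataset-specific bias offset for dataset d, subcategory k
# These weights are assumed already learned; learning them jointly is the
# paper's contribution and is not shown here.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lsm_score(weights, x):
    """LSM score: maximum response over the latent subcategory weight vectors."""
    return max(dot(w, x) for w in weights)

# Two subcategories, 3-D features (toy values).
v = [[1.0, 0.0, 0.5], [0.0, 1.0, -0.5]]
delta = {"pascal": [[0.2, 0.0, 0.0], [0.0, -0.1, 0.0]]}

x = [1.0, 2.0, 0.0]

# Compound score: use v alone, i.e. the dataset-agnostic model that is
# evaluated in the leave-one-dataset-out experiments.
compound = lsm_score(v, x)

# Dataset-specific score: v + delta for a seen dataset.
w_pascal = [[vi + di for vi, di in zip(vk, dk)]
            for vk, dk in zip(v, delta["pascal"])]
pascal = lsm_score(w_pascal, x)
```

The decomposition lets the compound weights v absorb what is common across datasets, while each delta soaks up that dataset's bias, so v alone is the natural model to deploy on an unseen dataset.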

Related Material


[bibtex]
@InProceedings{Stamos_2015_CVPR,
author = {Stamos, Dimitris and Martelli, Samuele and Nabi, Moin and McDonald, Andrew and Murino, Vittorio and Pontil, Massimiliano},
title = {Learning With Dataset Bias in Latent Subcategory Models},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}