Latent Task Adaptation with Large-Scale Hierarchies

Yangqing Jia, Trevor Darrell; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2080-2087

Abstract


Recent years have witnessed the success of large-scale image classification systems that are able to identify objects among thousands of possible labels. However, it remains unclear how general classifiers, such as those trained on ImageNet, can be optimally adapted to specific tasks, each of which only covers a semantically related subset of all the objects in the world. Retraining classifiers whenever a new task is given is inefficient and suboptimal, and it is inapplicable when tasks are not given explicitly but are implicitly specified as a set of image queries. In this paper we propose a novel probabilistic model that, given a set of query images from a latent task, jointly identifies the underlying task and performs prediction with a linear-time probabilistic inference algorithm. We present efficient ways to estimate parameters for the model, and an open-source toolbox to train classifiers in a distributed fashion at large scale. Empirical results on the ImageNet data show a significant performance increase over several baseline algorithms.
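
The abstract only describes the model at a high level. Below is a minimal illustrative sketch of the general idea of latent-task inference, not the paper's actual algorithm: it assumes each candidate task is a subset of leaf classes in the hierarchy, a pretrained classifier provides per-image probabilities over all leaf classes, and query images are treated as conditionally independent given the task. Function and variable names here are hypothetical.

# Hedged sketch: infer a latent task from a set of query images, then predict
# labels restricted to that task. Assumptions (not taken from the paper):
#   - tasks: list of index arrays, each holding the leaf classes of one candidate task
#   - image_probs: (n_images, n_classes) softmax outputs of a generic classifier
#   - images are conditionally independent given the task
import numpy as np

def infer_task_and_labels(image_probs, tasks, task_prior=None):
    """Return the most probable task index and per-image labels within that task."""
    if task_prior is None:
        task_prior = np.full(len(tasks), 1.0 / len(tasks))
    log_post = np.log(task_prior)
    for t, classes in enumerate(tasks):
        # probability mass each image assigns to this task's classes,
        # accumulated over images in log space (naive-Bayes-style aggregation)
        mass = image_probs[:, classes].sum(axis=1)
        log_post[t] += np.log(np.clip(mass, 1e-12, None)).sum()
    best = int(np.argmax(log_post))
    # predict each image's label using only the inferred task's classes
    best_classes = np.asarray(tasks[best])
    labels = best_classes[image_probs[:, best_classes].argmax(axis=1)]
    return best, labels

Each candidate task costs one pass over the query images, so the aggregation step is linear in the number of query images, consistent with the linear-time inference the abstract mentions.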

Related Material


[pdf]
[bibtex]
@InProceedings{Jia_2013_ICCV,
author = {Jia, Yangqing and Darrell, Trevor},
title = {Latent Task Adaptation with Large-Scale Hierarchies},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}