Active Learning of an Action Detector from Untrimmed Videos

Sunil Bandla, Kristen Grauman; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1833-1840

Abstract


Collecting and annotating videos of realistic human actions is tedious, yet critical for training action recognition systems. We propose a method to actively request the most useful video annotations among a large set of unlabeled videos. Predicting the utility of annotating unlabeled video is not trivial, since any given clip may contain multiple actions of interest, and it need not be trimmed to temporal regions of interest. To deal with this problem, we propose a detection-based active learner to train action category models. We develop a voting-based framework to localize likely intervals of interest in an unlabeled clip, and use them to estimate the total reduction in uncertainty that annotating that clip would yield. On three datasets, we show our approach can learn accurate action detectors more efficiently than alternative active learning strategies that fail to accommodate the "untrimmed" nature of real video data.
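The paper's selection criterion scores each unlabeled, untrimmed clip by the estimated reduction in uncertainty its annotation would yield over the clip's hypothesized action intervals. The following is a minimal sketch of that idea using generic binary entropy as the uncertainty measure; the interval confidences, the `select_clip` helper, and the video names are all illustrative assumptions, not the authors' actual voting-based formulation.

```python
import math

def entropy(p):
    """Binary entropy of a detector confidence p (probability that an
    interval contains the action). Maximal at p = 0.5, i.e. most uncertain."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_clip(videos):
    """Pick the untrimmed video whose candidate intervals carry the most
    total uncertainty, so annotating it is expected to help the most.

    `videos` maps a video id to a list of detector confidences, one per
    hypothesized action interval (e.g. from a localization step). This is
    a generic uncertainty-sampling stand-in for the paper's criterion."""
    return max(videos, key=lambda v: sum(entropy(p) for p in videos[v]))

clips = {
    "vid_a": [0.95, 0.9],        # confident detections: little to gain
    "vid_b": [0.5, 0.55, 0.45],  # several ambiguous intervals
    "vid_c": [0.1],              # confidently negative
}
print(select_clip(clips))  # → vid_b
```

Note that scoring whole clips by summing over intervals is what distinguishes this setup from frame- or clip-level active learning on pre-trimmed data: a single untrimmed video can contribute several uncertain intervals at once.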

Related Material


[bibtex]
@InProceedings{Bandla_2013_ICCV,
author = {Bandla, Sunil and Grauman, Kristen},
title = {Active Learning of an Action Detector from Untrimmed Videos},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013},
pages = {1833-1840}
}