What do 15,000 Object Categories Tell Us About Classifying and Localizing Actions?

Mihir Jain, Jan C. van Gemert, Cees G. M. Snoek; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 46-55

Abstract


This paper contributes to the automatic classification and localization of human actions in video. Whereas motion is the key ingredient in modern approaches, we assess the benefit of including objects in the video representation. Rather than considering a handful of carefully selected and localized objects, we conduct an empirical study on the benefit of encoding 15,000 object categories for actions, using six datasets totaling more than 200 hours of video and covering 180 action classes. Our key contributions are: i) the first in-depth study of encoding objects for actions; ii) we show that objects matter for actions, and are often semantically relevant as well; iii) we establish that actions have object preferences: rather than using all objects, selection is advantageous for action recognition; iv) we reveal that object-action relations are generic, which allows these relationships to be transferred from one domain to another; and v) objects, when combined with motion, improve the state-of-the-art for both action classification and localization.
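The idea of fusing object evidence with a motion descriptor can be sketched as follows. This is a hedged illustration, not the paper's exact pipeline: it assumes per-frame object-classifier scores are average-pooled into a video-level vector and then concatenated with a motion feature after per-modality L2 normalization; the function names and dimensions are illustrative.

```python
import numpy as np

def object_representation(frame_scores):
    """frame_scores: (num_frames, num_objects) array of classifier scores.
    Returns one video-level object vector via average pooling (an assumption)."""
    return frame_scores.mean(axis=0)

def fuse(object_vec, motion_vec):
    """Fuse modalities by L2-normalizing each and concatenating (an assumption)."""
    o = object_vec / (np.linalg.norm(object_vec) + 1e-12)
    m = motion_vec / (np.linalg.norm(motion_vec) + 1e-12)
    return np.concatenate([o, m])

# Toy example: 10 frames scored against 15,000 object categories,
# plus an illustrative 426-dimensional motion descriptor.
scores = np.random.rand(10, 15000)
motion = np.random.rand(426)
video_feature = fuse(object_representation(scores), motion)
print(video_feature.shape)  # (15426,)
```

The fused vector would then feed a standard classifier (e.g. a linear SVM) trained per action class; the paper's finding that selecting a subset of relevant objects helps would correspond to masking columns of `scores` before pooling.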

Related Material


[bibtex]
@InProceedings{Jain_2015_CVPR,
author = {Jain, Mihir and van Gemert, Jan C. and Snoek, Cees G. M.},
title = {What do 15,000 Object Categories Tell Us About Classifying and Localizing Actions?},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}