Dynamically Encoded Actions Based on Spacetime Saliency

Christoph Feichtenhofer, Axel Pinz, Richard P. Wildes; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 2755-2764

Abstract


Human actions typically occur over a well localized extent in both space and time. Similarly, as typically captured in video, human actions have small spatiotemporal support in image space. This paper capitalizes on these observations by weighting feature pooling for action recognition over those areas within a video where actions are most likely to occur. To enable this operation, we define a novel measure of spacetime saliency. The measure relies on two observations regarding foreground motion of human actors: They typically exhibit motion that contrasts with that of their surrounding region and they are spatially compact. By using the resulting definition of saliency during feature pooling we show that action recognition performance achieves state-of-the-art levels on three widely considered action recognition datasets. Our saliency weighted pooling can be applied to essentially any locally defined features and encodings thereof. Additionally, we demonstrate that inclusion of locally aggregated spatiotemporal energy features, which efficiently result as a by-product of the saliency computation, further boosts performance over reliance on standard action recognition features alone.
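The saliency-weighted pooling described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual measure: here "saliency" is approximated as local motion-magnitude contrast against a coarse background average, and the pooled descriptor is a weighted average of local features. The function names and the (N, D) feature layout are assumptions for illustration.

```python
import numpy as np

def saliency_weights(motion_mag, eps=1e-8):
    """Toy spacetime saliency: locations whose motion contrasts with the
    surround get high weight (illustrative, not the paper's formulation)."""
    background = motion_mag.mean()           # coarse surround estimate
    contrast = np.abs(motion_mag - background)
    return contrast / (contrast.sum() + eps)  # normalize to a distribution

def weighted_pool(features, weights):
    """Pool local descriptors with per-location saliency weights.
    features: (N, D) local descriptors; weights: (N,), summing to ~1."""
    return weights @ features                # (D,) saliency-weighted average

# Example: 100 local descriptors of dimension 8 with random motion magnitudes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))
motion = rng.random(100)
w = saliency_weights(motion)
pooled = weighted_pool(feats, w)
```

As the abstract notes, such weighting is agnostic to the choice of local features, so `feats` could hold any locally defined descriptors or encodings thereof.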

Related Material


[bibtex]
@InProceedings{Feichtenhofer_2015_CVPR,
author = {Feichtenhofer, Christoph and Pinz, Axel and Wildes, Richard P.},
title = {Dynamically Encoded Actions Based on Spacetime Saliency},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}