Learning Image Representations Tied to Ego-Motion

Dinesh Jayaraman, Kristen Grauman; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1413-1421

Abstract


Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e., they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.
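For intuition, the equivariance objective can be sketched as follows: for a pair of egocentric video frames (a, b) related by a discretized ego-motion class g, a learned per-motion linear map M_g should carry the feature z(a) close to z(b). The snippet below is a minimal, hypothetical sketch in PyTorch; the encoder, dimensions, and all names are illustrative assumptions, not the authors' architecture or training setup, and the paper applies this term as a regularizer alongside a recognition objective rather than in isolation.

import torch
import torch.nn as nn

class EquivariantFeatures(nn.Module):
    def __init__(self, feat_dim=64, num_motions=6):
        super().__init__()
        # Convolutional encoder z(.) mapping a frame to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # One learned linear map M_g per discretized ego-motion class g.
        self.motion_maps = nn.ModuleList(
            nn.Linear(feat_dim, feat_dim) for _ in range(num_motions)
        )

    def equivariance_loss(self, frame_a, frame_b, motion_ids):
        # Penalize ||M_g z(a) - z(b)||^2 for frame pairs (a, b) related
        # by ego-motion class g; motion_ids is a list of ints, one per pair.
        za = self.encoder(frame_a)
        zb = self.encoder(frame_b)
        mapped = torch.stack([self.motion_maps[g](z)
                              for z, g in zip(za, motion_ids)])
        return ((mapped - zb) ** 2).sum(dim=1).mean()

# Illustrative usage: frame pairs labeled only by which discretized
# ego-motion relates them; no semantic labels are needed for this term.
model = EquivariantFeatures()
a, b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
g = [0, 2, 1, 5, 3, 0, 4, 2]
loss = model.equivariance_loss(a, b, g)
loss.backward()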

Related Material


[bibtex]
@InProceedings{Jayaraman_2015_ICCV,
author = {Jayaraman, Dinesh and Grauman, Kristen},
title = {Learning Image Representations Tied to Ego-Motion},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}