Understanding Image Representations by Measuring Their Equivariance and Equivalence

Karel Lenc, Andrea Vedaldi; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 991-999

Abstract


Despite the importance of image representations such as histograms of oriented gradients and deep Convolutional Neural Networks (CNN), our theoretical understanding of them remains limited. Aiming at filling this gap, we investigate three key mathematical properties of representations: equivariance, invariance, and equivalence. Equivariance studies how transformations of the input image are encoded by the representation, invariance being a special case where a transformation has no effect. Equivalence studies whether two representations, for example two different parametrisations of a CNN, capture the same visual information or not. A number of methods to establish these properties empirically are proposed, including introducing transformation and stitching layers in CNNs. These methods are then applied to popular representations to reveal insightful aspects of their structure, including clarifying at which layers in a CNN certain geometric invariances are achieved. While the focus of the paper is theoretical, direct applications to structured-output regression are demonstrated too.
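The paper's empirical approach to equivariance can be illustrated with a small sketch: regress a linear map M_g so that the representation of a transformed image is approximated by M_g applied to the representation of the original, then inspect the residual. This is a toy illustration, not the authors' implementation: `represent` is a stand-in feature (per-row mean intensities) rather than a CNN or HOG, and the transformations are simple flips.

```python
import numpy as np

rng = np.random.default_rng(0)

def represent(image):
    # Stand-in for a real representation (e.g., a CNN layer or HOG);
    # here a toy feature: the vector of per-row mean intensities.
    return image.mean(axis=1)

def fit_equivariant_map(images, transform):
    # Regress a linear map M_g with represent(g(x)) ~= M_g @ represent(x).
    # A small residual means the representation is (linearly) equivariant
    # to g; if in addition M_g is close to the identity, it is invariant.
    X = np.stack([represent(im) for im in images], axis=1)            # d x n
    Y = np.stack([represent(transform(im)) for im in images], axis=1)  # d x n
    W, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)  # solves X.T @ W ~= Y.T
    M = W.T
    residual = np.linalg.norm(Y - M @ X) / np.linalg.norm(Y)
    return M, residual

images = [rng.random((8, 8)) for _ in range(100)]
vflip = lambda x: x[::-1, :]   # vertical flip: permutes the rows
hflip = lambda x: x[:, ::-1]   # horizontal flip: leaves row means unchanged

# Row means are permuted by a vertical flip, so a linear (permutation)
# map reproduces them almost exactly: equivariant, but not invariant.
M_v, res_v = fit_equivariant_map(images, vflip)

# Row means are unchanged by a horizontal flip, so the fitted map is
# close to the identity: the toy representation is invariant to hflip.
M_h, res_h = fit_equivariant_map(images, hflip)
```

Running this, both residuals come out near zero, while `M_v` is a reversal permutation (far from the identity) and `M_h` is essentially the identity, separating equivariance from the special case of invariance exactly as the abstract describes.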

Related Material


[bibtex]
@InProceedings{Lenc_2015_CVPR,
author = {Lenc, Karel and Vedaldi, Andrea},
title = {Understanding Image Representations by Measuring Their Equivariance and Equivalence},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}