Locally Aligned Feature Transforms across Views

Wei Li, Xiaogang Wang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3594-3601

Abstract

In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of the two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected into a common feature space, and then matched with softly assigned, locally optimized metrics. The features optimal for recognizing identities differ from those for clustering cross-view transforms; the two are jointly learned using a sparsity-inducing norm and information-theoretic regularization. The approach also generalizes to settings where test images come from new camera views not present in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.
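The matching scheme sketched in the abstract — project each view's features into a common space per local configuration, then combine locally optimized metrics with soft assignment weights — can be illustrated with a minimal NumPy sketch. All names here (`soft_assign_weights`, `locally_aligned_distance`, the softmax temperature) are illustrative assumptions, not the paper's actual formulation or learned parameters:

```python
import numpy as np

def soft_assign_weights(pair_feat, centers, temperature=1.0):
    """Soft assignment of an image pair to K local configurations.

    Illustrative choice: a softmax over negative squared distances
    to cluster centers in the (concatenated) pair-feature space.
    """
    d2 = ((centers - pair_feat) ** 2).sum(axis=1)   # (K,)
    logits = -d2 / temperature
    logits -= logits.max()                          # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def locally_aligned_distance(x_a, x_b, projections, metrics, centers):
    """Softly combined cross-view distance.

    projections: list of K pairs (P_a, P_b) mapping each view into a
                 shared m-dimensional space (local alignment).
    metrics:     list of K PSD matrices, one local metric per cluster.
    centers:     (K, d_a + d_b) cluster centers over pair features.
    """
    pair_feat = np.concatenate([x_a, x_b])
    w = soft_assign_weights(pair_feat, centers)
    dist = 0.0
    for w_k, (P_a, P_b), M_k in zip(w, projections, metrics):
        diff = P_a @ x_a - P_b @ x_b        # aligned in the common space
        dist += w_k * (diff @ M_k @ diff)   # local Mahalanobis term
    return dist
```

With PSD local metrics each term is non-negative, so the combined score behaves like a distance; the soft weights let a pair near a cluster boundary blend the neighboring transforms rather than commit to one.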

Related Material

[pdf]
[bibtex]
@InProceedings{Li_2013_CVPR,
author = {Li, Wei and Wang, Xiaogang},
title = {Locally Aligned Feature Transforms across Views},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}