Tracking via Robust Multi-task Multi-view Joint Sparse Representation

Zhibin Hong, Xue Mei, Danil Prokhorov, Dacheng Tao; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 649-656

Abstract

Combining multiple observation views has proven beneficial for tracking. In this paper, we cast tracking as a novel multi-task multi-view sparse learning problem and exploit cues from multiple views, including various types of visual features such as intensity, color, and edge; each feature observation can be sparsely represented by a linear combination of atoms from an adaptive feature dictionary. The proposed method is integrated into a particle filter framework, where every view of each particle is regarded as an individual task. We jointly consider the underlying relationship between tasks across different views and different particles, and address it with a unified robust multi-task formulation. In addition, to capture frequently emerging outlier tasks, we decompose the representation matrix into two collaborative components, which enables a more robust and accurate approximation. We show that the proposed formulation can be efficiently solved using the Accelerated Proximal Gradient method with a small number of closed-form updates. The presented tracker is implemented using four types of features and is tested on numerous benchmark video sequences. Both the qualitative and quantitative results demonstrate the superior performance of the proposed approach compared to several state-of-the-art trackers.
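To illustrate the kind of optimization the abstract describes, below is a minimal NumPy sketch of an Accelerated Proximal Gradient (FISTA-style) solver for a simplified version of the decomposed model: the coefficient matrix is split into a shared component P, penalized by a row-wise L2,1 group-sparsity norm that couples the tasks, and an outlier component Q, penalized elementwise by an L1 norm. This is a sketch under stated assumptions, not the paper's implementation: the function names (apg_joint_sparse, prox_l21, prox_l1), the single shared dictionary D, and the exact choice of penalties are illustrative stand-ins; the paper uses per-view feature dictionaries and its own mixed-norm regularizers.

import numpy as np

def prox_l21(M, t):
    # Row-wise group soft-thresholding: prox of t * ||M||_{2,1}.
    # Shrinks each row of M toward zero by its Euclidean norm, so the
    # same dictionary atoms tend to be selected across all tasks.
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * M

def prox_l1(M, t):
    # Elementwise soft-thresholding: prox of t * ||M||_1.
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def apg_joint_sparse(D, X, lam1, lam2, n_iter=200):
    # Minimize 0.5*||X - D(P+Q)||_F^2 + lam1*||P||_{2,1} + lam2*||Q||_1
    # by accelerated proximal gradient. D: (m, d) dictionary of d atoms;
    # X: (m, n) observations, one column per task (particle-view pair).
    d, n = D.shape[1], X.shape[1]
    P = np.zeros((d, n)); Q = np.zeros((d, n))
    P_prev, Q_prev = P.copy(), Q.copy()
    # The smooth term depends on P and Q only through C = P + Q, so the
    # Lipschitz constant of the joint gradient is 2 * sigma_max(D)^2.
    L = 2.0 * np.linalg.norm(D, 2) ** 2
    t_prev, t = 1.0, 1.0
    for _ in range(n_iter):
        # Nesterov extrapolation (momentum) step.
        beta = (t_prev - 1.0) / t
        Pm = P + beta * (P - P_prev)
        Qm = Q + beta * (Q - Q_prev)
        # Shared gradient of the smooth term w.r.t. both P and Q.
        G = D.T @ (D @ (Pm + Qm) - X)
        P_prev, Q_prev = P, Q
        # Closed-form proximal updates with step size 1/L.
        P = prox_l21(Pm - G / L, lam1 / L)
        Q = prox_l1(Qm - G / L, lam2 / L)
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return P, Q

In a tracker built around this sketch, the columns of X would stack the candidate particle observations for one view, and the tracking result would be the particle with the smallest reconstruction error under the recovered coefficients; each proximal step is a closed-form update, which is what makes APG efficient here.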

Related Material

[bibtex]
@InProceedings{Hong_2013_ICCV,
  author    = {Hong, Zhibin and Mei, Xue and Prokhorov, Danil and Tao, Dacheng},
  title     = {Tracking via Robust Multi-task Multi-view Joint Sparse Representation},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  month     = {December},
  year      = {2013},
  pages     = {649-656}
}