Graph-Laplacian PCA: Closed-Form Solution and Robustness

Bo Jiang, Chris Ding, Bin Luo, Jin Tang; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3492-3498

Abstract

Principal Component Analysis (PCA) is widely used to learn a low-dimensional representation. In many applications, both vector data X and graph data W are available. Laplacian embedding is widely used for embedding graph data. We propose graph-Laplacian PCA (gLPCA) to learn a low-dimensional representation of X that incorporates the graph structure encoded in W. This model has several advantages: (1) it is a data representation model; (2) it has a compact closed-form solution and can be computed efficiently; (3) it is capable of removing corruptions. Extensive experiments on 8 datasets show promising results on image reconstruction and significant improvements in clustering and classification.
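
As a concrete illustration, below is a minimal NumPy sketch of how a graph-regularized PCA of this kind can admit a closed-form solution via a single eigendecomposition. The abstract does not state the exact objective, so the formulation used here, minimizing ||X - U Q^T||_F^2 + alpha * Tr(Q^T L Q) over orthonormal Q with L the Laplacian of W, as well as the function name glpca and the trade-off parameter alpha, are assumptions for illustration rather than a verbatim statement of the paper's algorithm.

import numpy as np

def glpca(X, W, k, alpha):
    # X: d x n data matrix (columns are samples)
    # W: n x n symmetric graph affinity matrix
    # k: target embedding dimension
    # alpha: assumed trade-off between reconstruction and graph smoothness
    L = np.diag(W.sum(axis=1)) - W        # graph Laplacian L = D - W
    # For fixed orthonormal Q the best basis is U = X Q, so the objective
    # reduces to Tr(Q^T (-X^T X + alpha * L) Q); the minimizer is the set of
    # k eigenvectors with the smallest eigenvalues.
    G = -X.T @ X + alpha * L
    eigvals, eigvecs = np.linalg.eigh(G)  # eigenvalues in ascending order
    Q = eigvecs[:, :k]                    # n x k low-dimensional embedding
    U = X @ Q                             # d x k reconstruction basis
    return U, Q                           # X is approximated by U @ Q.T

Under this assumed formulation, the rows of Q serve as the low-dimensional representation used for clustering or classification, and U @ Q.T gives the reconstructed data.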

Related Material

[pdf]
[bibtex]
@InProceedings{Jiang_2013_CVPR,
author = {Jiang, Bo and Ding, Chris and Luo, Bin and Tang, Jin},
title = {Graph-Laplacian PCA: Closed-Form Solution and Robustness},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}