GraB: Visual Saliency via Novel Graph Model and Background Priors

Qiaosong Wang, Wen Zheng, Robinson Piramuthu; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 535-543

Abstract


We propose an unsupervised bottom-up saliency detection approach that exploits a novel graph structure and background priors. The input image is represented as an undirected graph with superpixels as nodes. Feature vectors are extracted from each node to capture regional color, contrast, and texture information. A novel graph model is proposed to effectively capture local and global saliency cues. To obtain more accurate saliency estimates, we optimize the saliency map using a robust background measure. Comprehensive evaluations on benchmark datasets indicate that our algorithm consistently surpasses state-of-the-art unsupervised methods and performs favorably against supervised approaches.
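
To make the pipeline in the abstract concrete, below is a minimal Python sketch of a superpixel-based saliency map that follows the same general recipe: segment the image into superpixels (graph nodes), describe each node with a simple color feature, score nodes with a global contrast cue, and re-weight with a boundary background prior. The choice of SLIC superpixels, mean Lab features, the specific contrast weighting, and the border-based background prior are all assumptions made for illustration; this is not the authors' GraB graph model or its background-measure optimization.

import numpy as np
from skimage import color, segmentation

def saliency_sketch(image_rgb, n_segments=200):
    # Superpixel segmentation; each superpixel becomes a graph node.
    labels = segmentation.slic(image_rgb, n_segments=n_segments, compactness=10)
    # Remap labels to 0..n-1 regardless of the skimage start-label convention.
    labels = np.unique(labels, return_inverse=True)[1].reshape(image_rgb.shape[:2])
    n = labels.max() + 1
    h, w = labels.shape
    lab = color.rgb2lab(image_rgb)

    # Node features: mean Lab color and normalized centroid per superpixel.
    feats = np.zeros((n, 3))
    centers = np.zeros((n, 2))
    for i in range(n):
        mask = labels == i
        feats[i] = lab[mask].mean(axis=0)
        ys, xs = np.nonzero(mask)
        centers[i] = [ys.mean() / h, xs.mean() / w]

    # Global contrast cue: color distance to all other nodes,
    # down-weighted by spatial distance (a generic contrast prior).
    color_d = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)
    spatial_d = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    contrast = (color_d * np.exp(-spatial_d / 0.25)).sum(axis=1)

    # Boundary background prior: superpixels touching the image border are
    # treated as likely background; distance to them boosts foreground nodes.
    border = np.unique(np.concatenate([labels[0], labels[-1],
                                       labels[:, 0], labels[:, -1]]))
    bg_d = np.linalg.norm(feats[:, None] - feats[border][None, :],
                          axis=2).min(axis=1)

    # Combine cues and normalize to [0, 1].
    sal = contrast * bg_d
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal[labels]  # per-pixel saliency map

# Example usage (hypothetical file name):
#   sal_map = saliency_sketch(skimage.io.imread("example.jpg"))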

Related Material


[bibtex]
@InProceedings{Wang_2016_CVPR,
author = {Wang, Qiaosong and Zheng, Wen and Piramuthu, Robinson},
title = {GraB: Visual Saliency via Novel Graph Model and Background Priors},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016},
pages = {535-543}
}