On the Relationship Between Visual Attributes and Convolutional Networks

Victor Escorcia, Juan Carlos Niebles, Bernard Ghanem; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1256-1264

Abstract


One of the cornerstone principles of deep models is their abstraction capacity, i.e. their ability to learn abstract concepts from 'simpler' ones. Through extensive experiments, we characterize the nature of the relationship between abstract concepts (specifically objects in images) learned by popular and high-performing convolutional networks (conv-nets) and established mid-level representations used in computer vision (specifically semantic visual attributes). We focus on attributes due to their impact on several applications, such as object description, retrieval and mining, and active (and zero-shot) learning. Among the findings we uncover, we show empirical evidence of the existence of Attribute Centric Nodes (ACNs) within a conv-net that is trained to recognize objects (not attributes) in images. These special conv-net nodes (1) collectively encode information pertinent to visual attribute representation and discrimination, (2) are unevenly and sparsely distributed across all layers of the conv-net, and (3) play an important role in conv-net based object recognition.
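The abstract's core idea, finding individual conv-net nodes whose activations carry attribute information, can be illustrated with a small probing sketch. The paper's exact protocol is not reproduced here; this is a minimal, self-contained illustration using synthetic activations (the array shapes, the planted node indices, and the scoring function are all assumptions for demonstration), where candidate attribute-centric nodes are ranked by how well each node's activation alone separates images with and without a binary attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for conv-net node activations: 200 images x 50 nodes.
# (In the paper these would come from the layers of a trained conv-net.)
n_images, n_nodes = 200, 50
has_attribute = rng.integers(0, 2, size=n_images)   # binary attribute label
activations = rng.normal(size=(n_images, n_nodes))

# Plant a few hypothetical "attribute centric" nodes whose activation
# shifts when the attribute is present.
planted_acns = [3, 17, 41]
for node in planted_acns:
    activations[:, node] += 2.0 * has_attribute

def rank_attribute_centric_nodes(acts, labels, k=3):
    """Score each node by how well it alone discriminates the attribute,
    using a standardized mean-difference (signal-to-noise) score, and
    return the indices of the top-k nodes."""
    pos = acts[labels == 1]
    neg = acts[labels == 0]
    score = np.abs(pos.mean(axis=0) - neg.mean(axis=0)) / (acts.std(axis=0) + 1e-8)
    return np.argsort(score)[::-1][:k]

top_nodes = rank_attribute_centric_nodes(activations, has_attribute, k=3)
print(sorted(top_nodes.tolist()))
```

With the planted signal, the ranking recovers the informative nodes; on real activations the same per-node scoring reveals which units, and which layers, concentrate attribute information.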

Related Material


[bibtex]
@InProceedings{Escorcia_2015_CVPR,
author = {Escorcia, Victor and Carlos Niebles, Juan and Ghanem, Bernard},
title = {On the Relationship Between Visual Attributes and Convolutional Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}