Analyzing Classifiers: Fisher Vectors and Deep Neural Networks

Sebastian Lapuschkin, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2912-2920

Abstract


Fisher vector (FV) classifiers and Deep Neural Networks (DNNs) are popular and successful algorithms for solving image classification problems. However, both are generally considered "black box" predictors, as the non-linear transformations involved have so far prevented transparent and interpretable reasoning. Recently, a principled technique, Layer-wise Relevance Propagation (LRP), has been developed to better comprehend the inherent structured reasoning of complex nonlinear classification models such as Bag of Features models or DNNs. In this paper we (1) extend the LRP framework to Fisher vector classifiers and then use it as an analysis tool to (2) quantify the importance of context for classification, (3) qualitatively compare DNNs against FV classifiers in terms of important image regions, and (4) detect potential flaws and biases in data. All experiments are performed on the PASCAL VOC 2007 and ILSVRC 2012 data sets.
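As a rough illustration of the kind of relevance redistribution LRP performs (this is a minimal sketch, not the authors' implementation), the commonly used ε-rule for a single linear layer propagates the output relevance back to the inputs in proportion to each input's contribution to the pre-activation:

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs a of one
    linear layer (z = a @ W + b) using the LRP epsilon rule.

    a: (d_in,) input activations
    W: (d_in, d_out) weight matrix
    b: (d_out,) bias
    R_out: (d_out,) relevance assigned to the layer's outputs
    """
    z = a @ W + b                # forward pre-activations
    z = z + eps * np.sign(z)     # epsilon stabilizer against small denominators
    s = R_out / z                # relevance per unit of pre-activation
    return a * (W @ s)           # each input's share of the relevance

# Toy usage: with zero bias, the total relevance is (approximately)
# conserved from output to input, a key property of LRP.
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.standard_normal((4, 3))
R_in = lrp_epsilon(a, W, np.zeros(3), np.ones(3))
```

Applying such a rule layer by layer, from the classifier output down to the pixels, yields the relevance heatmaps the paper uses to compare FV and DNN classifiers.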

Related Material


[bibtex]
@InProceedings{Lapuschkin_2016_CVPR,
author = {Lapuschkin, Sebastian and Binder, Alexander and Montavon, Gr{\'e}goire and M{\"u}ller, Klaus-Robert and Samek, Wojciech},
title = {Analyzing Classifiers: Fisher Vectors and Deep Neural Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}