Transformation-Invariant Convolutional Jungles

Dmitry Laptev, Joachim M. Buhmann; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3043-3051

Abstract


Many Computer Vision problems arise from information processing of data sources with nuisance variances like scale, orientation, contrast, perspective foreshortening or - in medical imaging - staining and local warping. In most cases these variances can be stated a priori and can be used to improve the generalization of recognition algorithms. We propose a novel supervised feature learning approach, which efficiently extracts information from these constraints to produce interpretable, transformation-invariant features. The proposed method can incorporate a large class of transformations, e.g., shifts, rotations, change of scale, morphological operations, non-linear distortions, photometric transformations, etc. These features boost the discrimination power of a novel image classification and segmentation method, which we call Transformation-Invariant Convolutional Jungles (TICJ). We test the algorithm on two benchmarks in face recognition and medical imaging, where it achieves state-of-the-art results while being significantly more computationally efficient than Deep Neural Networks.
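The core idea of pooling filter responses over an a priori transformation group can be illustrated with a minimal sketch. The function and variable names below are assumptions for illustration, not the paper's formulation; here the group is the four 90-degree rotations and the pooling is a max over transformed copies of the kernel, which makes the response invariant to rotating the input patch.

```python
import numpy as np

def invariant_response(patch, kernel, transforms):
    """Max-pool a linear filter response over transformed kernels.

    Hypothetical sketch of transformation-invariant features: the
    response is the maximum dot product between the patch and each
    transformed copy of the kernel, so it is unchanged when the patch
    itself undergoes any transformation in the group.
    """
    return max(float(np.sum(patch * t(kernel))) for t in transforms)

# Transformation group: the four 90-degree rotations (exact on grids,
# so no interpolation is needed for this illustration).
rotations = [lambda k, r=r: np.rot90(k, r) for r in range(4)]

# A vertical-edge kernel applied to a horizontal-edge patch: the plain
# dot product is zero (orientation mismatch), while the rotation-pooled
# response recovers the match at the best orientation.
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
patch = np.array([[1.0, 1.0], [-1.0, -1.0]])

plain = float(np.sum(patch * kernel))                   # 0.0
pooled = invariant_response(patch, kernel, rotations)   # 4.0
```

Larger groups (shifts, scalings, morphological or photometric transformations, as listed in the abstract) plug in the same way: supply the corresponding list of kernel transforms.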

Related Material


[pdf]
[bibtex]
@InProceedings{Laptev_2015_CVPR,
  author    = {Laptev, Dmitry and Buhmann, Joachim M.},
  title     = {Transformation-Invariant Convolutional Jungles},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2015},
  pages     = {3043-3051}
}