Simpler Non-Parametric Methods Provide as Good or Better Results to Multiple-Instance Learning

Ragav Venkatesan, Parag Chandakkar, Baoxin Li; The IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2605-2613


Multiple-instance learning (MIL) is a unique learning problem in which training labels are available only for collections of objects (called bags) rather than for individual objects (called instances). A plethora of approaches has been developed to solve this problem over the years; popular methods include diverse density, MILIS, and DD-SVM. While widely used, these methods, particularly those applied in computer vision, rely on fairly sophisticated solutions tailored to certain unique and particular configurations of the MIL space. In this paper, we analyze the MIL feature space using modified versions of traditional non-parametric techniques such as the Parzen window and k-nearest-neighbour, and develop a learning approach employing distances to the k nearest neighbours of a point in the feature space. We show that these methods work as well as, if not better than, most recently published methods on benchmark datasets. We compare and contrast our analysis with the well-established diverse-density approach and its variants in the recent literature, using benchmark datasets including the Musk, Andrews', and Corel datasets, along with a diabetic retinopathy pathology diagnosis dataset. Experimental results demonstrate that, while enjoying an intuitive interpretation and supporting fast learning, these methods have the potential to deliver improved performance even for complex data arising from real-world applications.
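To make the flavour of such a non-parametric approach concrete, the following is a minimal sketch (not the paper's exact method) of scoring a bag by comparing the mean distance from each of its instances to the k nearest instances pooled from positive versus negative training bags; the function name `knn_bag_score` and the toy data are illustrative assumptions.

```python
import numpy as np

def knn_bag_score(bag, pos_insts, neg_insts, k=3):
    """Score a bag by mean distance to the k nearest instances pooled from
    positive vs. negative training bags. A score > 0 suggests the bag lies
    closer to the positive pool. Illustrative sketch only, not the paper's
    exact algorithm."""
    def mean_knn_dist(x, pool):
        d = np.linalg.norm(pool - x, axis=1)  # distances to every pooled instance
        return np.sort(d)[:k].mean()          # average of the k smallest
    # per-instance margin: farther from negatives and nearer to positives is better
    scores = [mean_knn_dist(x, neg_insts) - mean_knn_dist(x, pos_insts) for x in bag]
    return float(np.mean(scores))

# toy usage: positive instances cluster near (5, 5), negatives near (0, 0)
rng = np.random.default_rng(0)
pos = rng.normal(5.0, 0.5, size=(30, 2))
neg = rng.normal(0.0, 0.5, size=(30, 2))
test_pos_bag = rng.normal(5.0, 0.5, size=(5, 2))
test_neg_bag = rng.normal(0.0, 0.5, size=(5, 2))
print(knn_bag_score(test_pos_bag, pos, neg) > 0)  # bag near the positive cluster
print(knn_bag_score(test_neg_bag, pos, neg) < 0)  # bag near the negative cluster
```

Such a scheme needs no iterative optimization: learning amounts to storing the training instances, which is one reason distance-based non-parametric methods are fast and easy to interpret.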

Related Material

@InProceedings{Venkatesan_2015_ICCV,
  author    = {Venkatesan, Ragav and Chandakkar, Parag and Li, Baoxin},
  title     = {Simpler Non-Parametric Methods Provide as Good or Better Results to Multiple-Instance Learning},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {December},
  year      = {2015}
}