Learning to Transfer: Transferring Latent Task Structures and Its Application to Person-Specific Facial Action Unit Detection

Timur Almaev, Brais Martinez, Michel Valstar; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 3774-3782

Abstract

In this paper we explore the problem of constructing person-specific models for the detection of facial Action Units (AUs), addressing it from the point of view of Transfer Learning and Multi-Task Learning. Our starting point is the fact that some expressions, such as smiles, are very easy to elicit, annotate, and detect automatically, while others are much harder to elicit and to annotate. We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), and little or no data for the target AU. To this end, we propose a novel Multi-Task Learning framework, and an associated Transfer Learning framework, that consider relations both across subjects and across AUs; that is to say, a tensor structure among the tasks. Our approach hinges on learning the latent relations among tasks using a single reference AU, and then transferring these latent relations to the other AUs. We show that the annotated data for AU12 can be used effectively when learning person-specific models for other AUs, even in the absence of data for the target task. Finally, we show the excellent performance of our method when small amounts of annotated data for the target tasks are made available.
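
The abstract does not spell out the model, but the idea of learning latent task relations on a reference AU and transferring them can be illustrated with a small sketch. The NumPy code below is a hedged, hypothetical illustration, not the authors' formulation: it assumes linear per-subject AU detectors whose weights factor as W = L S (a shared latent basis L and per-subject codes S, in the spirit of GO-MTL-style latent task structures). The codes S learnt on the reference AU (AU12) stand in for the latent relations among subjects and are reused, with a re-fitted basis, for a target AU with scarce labels. All function names, the ridge objective, and the hyper-parameters (k, lam, iters) are assumptions.

import numpy as np

def _solve_basis(X_list, y_list, S, lam):
    # Closed-form ridge update of the shared basis L with the codes S fixed:
    # minimise sum_s ||X_s L s_s - y_s||^2 + lam ||L||_F^2 over L.
    d, k = X_list[0].shape[1], S.shape[0]
    G = lam * np.eye(d * k)
    b = np.zeros(d * k)
    for s, (X, y) in enumerate(zip(X_list, y_list)):
        G += np.kron(np.outer(S[:, s], S[:, s]), X.T @ X)
        b += np.kron(S[:, s], X.T @ y)
    return np.linalg.solve(G, b).reshape(k, d).T  # undo column-major vec(L)

def fit_reference_au(X_list, y_list, k=4, lam=0.1, iters=25):
    # Alternating least squares on the reference AU (AU12): X_list[s] holds
    # the (n_s, d) features of subject s, y_list[s] the AU12 labels.
    d, m = X_list[0].shape[1], len(X_list)
    rng = np.random.default_rng(0)
    L = 0.01 * rng.standard_normal((d, k))
    S = 0.01 * rng.standard_normal((k, m))
    for _ in range(iters):
        for s, (X, y) in enumerate(zip(X_list, y_list)):
            A = X @ L  # (n_s, k): features projected onto the latent basis
            S[:, s] = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ y)
        L = _solve_basis(X_list, y_list, S, lam)
    return L, S

def transfer_to_target_au(S_ref, X_tgt_list, y_tgt_list, lam=0.1):
    # Few-shot transfer: keep the subject codes learnt on AU12 (the latent
    # task relations) fixed and re-fit only the basis on the scarce
    # target-AU data. Subjects must appear in the same order as before.
    return _solve_basis(X_tgt_list, y_tgt_list, S_ref, lam)

Under these assumptions, subject s's person-specific detector for the target AU is the linear model L_tgt @ S_ref[:, s], so a handful of target-AU labels only has to pin down the shared basis rather than a full weight vector per subject; with no target data at all the latent structure itself must be transferred, which is the harder setting the paper addresses.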

Related Material

[bibtex]
@InProceedings{Almaev_2015_ICCV,
author = {Almaev, Timur and Martinez, Brais and Valstar, Michel},
title = {Learning to Transfer: Transferring Latent Task Structures and Its Application to Person-Specific Facial Action Unit Detection},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015},
pages = {3774-3782}
}