Translating Video Content to Natural Language Descriptions

Marcus Rohrbach, Wei Qiu, Ivan Titov, Stefan Thater, Manfred Pinkal, Bernt Schiele; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 433-440

Abstract

Humans use rich natural language to describe and communicate visual perceptions. To provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content, including, e.g., object and activity labels. To predict the semantic representation, we learn a CRF that models the relationships between the different components of the visual input. Second, we propose to formulate the generation of natural language as a machine translation problem, with the semantic representation as the source language and the generated sentences as the target language. For this, we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset [23], which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments, we show significant improvements over several baseline approaches motivated by prior work. Our translation approach also improves over related work on an image description task.
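
To make the translation idea concrete, the following is a minimal sketch, not the authors' implementation: the paper adapts a full phrase-based statistical MT pipeline trained on the parallel corpus, whereas here the slot names, the phrase table, and the greedy monotone decoder are all invented for illustration. It shows only the core framing: linearize the CRF-predicted semantic representation into a "source sentence" and map it, phrase by phrase, into English.

from typing import List, Sequence

# Toy phrase table: source-language tokens -> English phrases.
# In real SMT these pairs and their scores are learned from the
# parallel corpus; they are hard-coded here to keep the sketch runnable.
PHRASE_TABLE = {
    "person": "the person",
    "cut": "cuts",
    "carrot": "the carrot",
    "knife": "with a knife",
}

def linearize(semantic_rep: Sequence[str]) -> List[str]:
    # Flatten the structured CRF prediction into a token sequence:
    # the "source sentence" handed to the translation model.
    # "NULL" marks an unfilled semantic slot and is dropped.
    return [slot for slot in semantic_rep if slot != "NULL"]

def translate(source_tokens: Sequence[str]) -> str:
    # Greedy, monotone, token-by-token lookup: a stand-in for the
    # beam-search phrase-based decoder a real SMT system would use.
    phrases = [PHRASE_TABLE.get(tok, tok) for tok in source_tokens]
    return " ".join(phrases) + "."

if __name__ == "__main__":
    # A semantic representation as the CRF might output it for one video
    # snippet; the slot order (subject, activity, object, tool) is an
    # assumption made for this sketch.
    semantic_rep = ("person", "cut", "carrot", "knife")
    print(translate(linearize(semantic_rep)))
    # -> the person cuts the carrot with a knife.

A real decoder would additionally score competing phrase segmentations and reorderings with translation and language-model probabilities; the greedy lookup above omits all of that to keep the example under a page.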

Related Material

[pdf]
[bibtex]
@InProceedings{Rohrbach_2013_ICCV,
author = {Rohrbach, Marcus and Qiu, Wei and Titov, Ivan and Thater, Stefan and Pinkal, Manfred and Schiele, Bernt},
title = {Translating Video Content to Natural Language Descriptions},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}