Story-Driven Summarization for Egocentric Video

Zheng Lu, Kristen Grauman; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2714-2721

Abstract


We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk-based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event "leads to" another; critically, this captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.
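To make the random-walk influence idea concrete, here is a minimal, illustrative sketch. It is not the paper's actual formulation: subshots are stood in for by hypothetical sets of detected object labels, influence is approximated by a random walk with restart over an object-sharing graph, and the k-subshot chain is built greedily in temporal order rather than by the paper's optimization.

```python
# Illustrative sketch only: hypothetical subshot data and a greedy chain,
# not the method described in the paper.

def row_normalize(adj):
    """Turn a nonnegative adjacency matrix (list of lists) into
    row-stochastic random-walk transition probabilities."""
    n = len(adj)
    trans = []
    for row in adj:
        s = sum(row)
        # a subshot with no links transitions uniformly
        trans.append([a / s for a in row] if s else [1.0 / n] * n)
    return trans

def walk_influence(trans, i, restart=0.15, steps=50):
    """Random walk with restart at subshot i; the stationary visit
    distribution serves as i's 'influence' on every other subshot."""
    n = len(trans)
    v = [0.0] * n
    v[i] = 1.0
    for _ in range(steps):
        nxt = [0.0] * n
        for a in range(n):
            for b in range(n):
                nxt[b] += (1 - restart) * v[a] * trans[a][b]
        nxt[i] += restart  # restart mass keeps the walk anchored at i
        v = nxt
    return v

def greedy_story_chain(subshot_objects, k):
    """Greedily pick a k-subshot chain: each step takes the later subshot
    most influenced by the previously chosen one (a simplification of
    selecting a chain that maximizes total influence)."""
    n = len(subshot_objects)
    # edge weight = number of objects two subshots share
    adj = [[len(subshot_objects[a] & subshot_objects[b]) if a != b else 0
            for b in range(n)] for a in range(n)]
    trans = row_normalize(adj)
    chain = [0]  # start the story at the first subshot
    while len(chain) < k:
        infl = walk_influence(trans, chain[-1])
        # temporal order: only consider subshots after the current one
        cand = max((j for j in range(n) if j > chain[-1]),
                   key=lambda j: infl[j], default=None)
        if cand is None:
            break
        chain.append(cand)
    return chain

# Toy example: objects "hand off" from one subshot to the next.
subshots = [{"cup", "stove"}, {"cup", "laptop"},
            {"laptop", "phone"}, {"phone", "tv"}]
print(greedy_story_chain(subshots, 3))  # [0, 1, 2]
```

Because influence flows through shared objects, each chosen subshot "leads to" the next via the object they have in common, which is the intuition the abstract describes.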

Related Material


[bibtex]
@InProceedings{Lu_2013_CVPR,
author = {Lu, Zheng and Grauman, Kristen},
title = {Story-Driven Summarization for Egocentric Video},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}