Learning Deep Intrinsic Video Representation by Exploring Temporal Coherence and Graph Structure

Yingwei Pan, Yehao Li, Ting Yao, Tao Mei, Houqiang Li, Yong Rui

International Joint Conference on Artificial Intelligence (IJCAI)

Learning video representations is not a trivial task, as video is an information-intensive medium in which each frame does not exist independently. Locally, a video frame is visually and semantically similar to its adjacent frames. Holistically, a video has an inherent structure: the correlations among its frames. For example, even frames that are far apart may share similar semantics. Such contextual information is therefore important for characterizing the intrinsic representation of a video frame. In this paper, we present a novel approach to learning deep video representations by exploring both local and holistic contexts. Specifically, we propose a triplet sampling mechanism to encode the local temporal relationships of adjacent frames based on their deep representations. In addition, we incorporate the graph structure of the video, as a prior, to holistically preserve the inherent correlations among video frames. Our approach is fully unsupervised and trained end-to-end in a deep convolutional neural network architecture. Through extensive experiments, we show that the learned representation significantly boosts several video recognition tasks (retrieval, classification, and highlight detection) over traditional video representations.
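To make the two ingredients of the objective concrete, below is a minimal PyTorch-style sketch, not the paper's implementation: the frame embeddings, the affinity graph `W`, the margin, and the 0.1 trade-off weight are all assumed placeholders. Positives are taken as temporally adjacent frames (local coherence), and a graph-Laplacian term encourages frames connected in the video's graph structure to stay close (holistic correlations).

```python
import torch
import torch.nn.functional as F

def triplet_coherence_loss(anchor, positive, negative, margin=1.0):
    """Ranking loss for local temporal coherence: an anchor frame's
    embedding should lie closer to an adjacent frame (positive) than
    to a temporally distant frame (negative)."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

def graph_regularizer(emb, adjacency):
    """Holistic term: trace(E^T L E) with graph Laplacian L = D - W.
    Penalizes embedding distance between frames that the video's
    graph connects, preserving long-range frame correlations."""
    laplacian = torch.diag(adjacency.sum(dim=1)) - adjacency
    return torch.trace(emb.t() @ laplacian @ emb)

# Toy usage on random data (shapes only; no real video or CNN encoder).
T, dim = 32, 128                              # frames per video, embedding size
emb = torch.randn(T, dim, requires_grad=True) # stand-in for CNN frame embeddings
W = torch.rand(T, T)
W = (W + W.t()) / 2                           # symmetric frame-affinity graph (assumed)

idx = torch.arange(T - 1)                     # anchor frames
pos = idx + 1                                 # adjacent frames as positives
neg = torch.randint(0, T, (T - 1,))           # random (likely distant) negatives

loss = triplet_coherence_loss(emb[idx], emb[pos], emb[neg]) \
       + 0.1 * graph_regularizer(emb, W)      # 0.1: assumed trade-off weight
loss.backward()                               # gradients would flow to the encoder
```

In an end-to-end setting, `emb` would be produced by the convolutional network itself, so minimizing this combined objective shapes the network's representation without any labels, consistent with the unsupervised training described above.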