Salient object detection in image sequences via spatial-temporal cue

Chuang Gan, Zengchang Qin, Jia Xu, Tao Wan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Contemporary video search and categorization are non-trivial tasks owing to the rapidly growing volume and content variety of videos. We advance the study of visual saliency models in video, where such a model is employed to separate salient objects from the image background. Motivated by the observation that motion in video often attracts more human attention than static image content, we devise a region-contrast-based saliency detection model using spatial-temporal cues (RCST). We introduce and study four saliency principles to realize RCST, generalizing previous saliency computational models for static images to video. Experiments on a publicly available video segmentation database show that our method significantly outperforms seven state-of-the-art methods in terms of PR curve, ROC curve, and visual comparison.
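The abstract describes region-contrast saliency driven by both spatial (appearance) and temporal (motion) cues. The paper's four saliency principles and exact formulation are not given here, so the following is only a minimal illustrative sketch, not the authors' RCST implementation: each region's saliency is its size-weighted feature contrast against all other regions, computed separately for appearance and motion features and fused with a hypothetical mixing weight `alpha`.

```python
import numpy as np

def region_contrast(features, sizes):
    """Saliency of each region as its feature contrast against all others.

    features: (n_regions, d) mean feature per region (e.g. Lab colour,
              or optical-flow magnitude for the temporal cue).
    sizes:    (n_regions,) pixel counts; larger regions contribute more.
    Returns a saliency vector normalised to [0, 1].
    """
    w = sizes / sizes.sum()
    # pairwise Euclidean distances between region features: (n, n)
    diff = features[:, None, :] - features[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    # contrast of region i = size-weighted sum of distances to all regions
    sal = (dist * w[None, :]).sum(axis=1)
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)

def spatial_temporal_saliency(color_feats, motion_feats, sizes, alpha=0.5):
    """Hypothetical fusion of spatial and temporal region-contrast maps
    as a convex combination (alpha is an assumed parameter, not from
    the paper)."""
    s_spatial = region_contrast(color_feats, sizes)
    s_temporal = region_contrast(motion_feats, sizes)
    return alpha * s_spatial + (1 - alpha) * s_temporal
```

With this sketch, a small fast-moving region whose colour also differs from a static background would receive the highest combined saliency, reflecting the observation that motion attracts attention.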

Original language: English
Title of host publication: IEEE VCIP 2013 - 2013 IEEE International Conference on Visual Communications and Image Processing
DOIs
State: Published - 2013
Event: 2013 IEEE International Conference on Visual Communications and Image Processing, IEEE VCIP 2013 - Kuching, Sarawak, Malaysia
Duration: 17 Nov 2013 - 20 Nov 2013

Publication series

Name: IEEE VCIP 2013 - 2013 IEEE International Conference on Visual Communications and Image Processing

Conference

Conference: 2013 IEEE International Conference on Visual Communications and Image Processing, IEEE VCIP 2013
Country/Territory: Malaysia
City: Kuching, Sarawak
Period: 17/11/13 - 20/11/13

Keywords

  • object detection
  • saliency
  • spatial-temporal cue
