Dancelets Mining for Video Recommendation Based on Dance Styles

Tingting Han, Hongxun Yao, Chenliang Xu, Xiaoshuai Sun, Yanhao Zhang, Jason J. Corso

Research output: Contribution to journal › Article › peer-review

29 Scopus citations

Abstract

Dance is a unique and meaningful type of human expression, composed of abundant and varied action elements. However, existing methods based on associated texts and spatial visual features have difficulty capturing its highly articulated motion patterns. To overcome this limitation, we propose to exploit the intrinsic motion information in dance videos to solve the video recommendation problem. We present a novel system that recommends dance videos based on a mid-level action representation, termed Dancelets. The dancelets bridge the semantic gap between video content and the high-level concept of dance style, which plays a significant role in characterizing different types of dance. The proposed method mines dancelets automatically by applying normalized cut clustering followed by linear discriminant analysis, which ensures that the discovered dancelets are both representative and discriminative. Additionally, to exploit the motion cues in videos, we employ motion boundaries as saliency priors to generate volumes of interest and extract C3D features to capture spatiotemporal information from the mid-level patches. Extensive experiments on our proposed large-scale dance dataset, the HIT Dances dataset, demonstrate the effectiveness of the proposed method for dance style-based video recommendation.
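The abstract outlines a mining pipeline: cluster candidate mid-level patches, then fit a discriminative detector per cluster and use detector responses to represent a video. The sketch below illustrates that idea only in broad strokes and is not the authors' implementation: it assumes pre-extracted C3D patch descriptors (the hypothetical `patch_features` array), substitutes scikit-learn's SpectralClustering for normalized cuts and LinearDiscriminantAnalysis for the LDA detectors, and omits motion-boundary saliency and volume-of-interest generation entirely.

```python
# Minimal illustrative sketch of dancelet-style mining, under the assumptions
# stated above. All function and variable names here are hypothetical.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def mine_dancelets(patch_features, n_clusters=20, random_state=0):
    """Group candidate mid-level patches (normalized-cut-style spectral
    clustering) and fit a one-vs-rest linear discriminant detector per cluster."""
    clustering = SpectralClustering(
        n_clusters=n_clusters, affinity="rbf", random_state=random_state
    )
    labels = clustering.fit_predict(patch_features)

    detectors = []
    for k in range(n_clusters):
        # Positives are the members of cluster k; everything else is negative.
        y = (labels == k).astype(int)
        if y.sum() < 2:  # skip degenerate clusters
            continue
        detector = LinearDiscriminantAnalysis()
        detector.fit(patch_features, y)
        detectors.append(detector)
    return labels, detectors


def dancelet_activation(video_patch_features, detectors):
    """Max-pool each detector's response over a video's patches to obtain a
    fixed-length activation vector usable for style-based matching."""
    scores = [d.decision_function(video_patch_features).max() for d in detectors]
    return np.asarray(scores)


if __name__ == "__main__":
    # Toy random data standing in for C3D descriptors of mid-level patches.
    rng = np.random.default_rng(0)
    training_patches = rng.normal(size=(200, 64))
    _, detectors = mine_dancelets(training_patches, n_clusters=5)
    query = dancelet_activation(rng.normal(size=(10, 64)), detectors)
    print("dancelet activation vector:", query.round(2))
```

In such a setup, videos could be compared by the similarity of their activation vectors; the paper's actual recommendation scheme, saliency priors, and feature extraction differ and should be consulted directly.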

Original language: English
Article number: 7752919
Pages (from-to): 712-724
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Volume: 19
Issue number: 4
DOIs
State: Published - Apr 2017

Keywords

  • Dancelets
  • LDA detector
  • dance style
  • normalized cuts
  • spatiotemporal features
  • video recommendation

