Multimedia event detection with multimodal feature fusion and temporal concept localization

Sangmin Oh, Scott McCloskey, Ilseo Kim, Arash Vahdat, Kevin J. Cannons, Hossein Hajimirsadeghi, Greg Mori, A. G. Amitha Perera, Megha Pandey, Jason J. Corso

Research output: Contribution to journal › Article › peer-review


Abstract

We present a system for multimedia event detection. The system characterizes complex multimedia events using a large array of multimodal features and classifies unseen videos by effectively fusing the diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, building mid-level and high-level features upon low-level features, often in an unsupervised manner, to enable semantic understanding. Second, we introduce a novel Latent SVM model that learns and localizes discriminative high-level concepts in cluttered video sequences. In addition to improving detection accuracy over existing approaches, it produces a unique summary for every retrieval through its use of high-level concepts and temporal evidence localization; the resulting summary provides some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and a methodology for improving fusion learning under limited training data conditions. Thorough evaluation on the large TRECVID MED 2011 dataset showcases the benefits of the presented system.
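A minimal sketch of the two ideas named in the abstract, a latent-SVM-style temporal scoring rule and weighted late fusion of per-modality detector scores. This is an illustrative reading of the abstract only, not the authors' implementation; the function names, feature dimensions, and fusion weights below are hypothetical.

```python
import numpy as np

def latent_temporal_score(segment_features, w, b=0.0):
    """Score a video with the latent variable taken to be the temporal segment
    containing the discriminative concept evidence (a sketch of the idea, not
    the paper's model).

    segment_features: (num_segments, dim) array of per-segment concept features
    w: (dim,) learned weight vector for one event class
    Returns (score, best_segment_index); the argmax doubles as a temporal
    localization of why the video received its score.
    """
    responses = segment_features @ w + b      # response of every candidate segment
    best = int(np.argmax(responses))          # latent assignment: most discriminative segment
    return float(responses[best]), best

def late_fusion(modality_scores, fusion_weights):
    """Weighted late fusion of per-modality scores (e.g. visual, audio,
    high-level concept channels) into a single event score."""
    s = np.asarray(modality_scores, dtype=float)
    a = np.asarray(fusion_weights, dtype=float)
    return float(np.dot(s, a) / a.sum())

# Toy usage with random numbers standing in for real features and weights.
rng = np.random.default_rng(0)
segments = rng.normal(size=(12, 64))   # 12 temporal segments, 64-d concept features
w = rng.normal(size=64)
score, loc = latent_temporal_score(segments, w)
fused = late_fusion([score, 0.3, -0.1], [0.5, 0.3, 0.2])
print(f"event score={score:.3f}, localized segment={loc}, fused={fused:.3f}")
```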

Original language: English
Pages (from-to): 49-69
Number of pages: 21
Journal: Machine Vision and Applications
Volume: 25
Issue number: 1
DOIs
State: Published - Jan 2014

Keywords

  • Classification
  • Fusion
  • Machine learning
  • Multimedia

