Vehicle Tracking Using Surveillance with Multimodal Data Fusion

Yue Zhang, Bin Song, Xiaojiang Du, Mohsen Guizani

Research output: Contribution to journal › Article › peer-review

57 Scopus citations

Abstract

Vehicle location prediction, or vehicle tracking, is an important topic in connected vehicles. The task is difficult, however, when only single-modal data are available, which can introduce bias and limit accuracy. With the development of sensor networks in connected vehicles, multimodal data are becoming accessible. We therefore propose a framework for vehicle tracking with multimodal data fusion. Specifically, we fuse the results of two modalities, images and velocities, in our vehicle-tracking task. Images, processed in the vehicle-detection module, provide visual information about vehicle features, whereas velocity estimation narrows down the possible locations of the target vehicles, reducing the number of candidates to be compared and thereby the time consumption and computational cost. Our vehicle-detection model is a color-faster R-CNN that takes both the texture and color of vehicles as input, while velocity estimation is achieved with a Kalman filter, a classical tracking method. Finally, a multimodal data fusion method integrates these outcomes to accomplish the vehicle-tracking task. Experimental results demonstrate the effectiveness of our methods, which can track vehicles across a series of surveillance cameras in urban areas.
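The velocity-estimation step described above relies on a Kalman filter to predict where a tracked vehicle is likely to appear next. As a minimal sketch of that idea (not the paper's implementation), the following constant-velocity Kalman filter estimates a vehicle's 1-D position and velocity from noisy position measurements; the time step `dt` and the noise parameters `q` and `r` are illustrative assumptions:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter over 1-D position measurements.

    Illustrative sketch only: dt, q (process noise), and r (measurement
    noise) are assumed values, not parameters from the paper.
    Returns a list of (position, velocity) estimates, one per measurement.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])  # initial state estimate
    P = np.eye(2)                           # initial state covariance
    estimates = []
    for z in measurements:
        # Predict step: project state and covariance forward.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the new measurement.
        y = np.array([[z]]) - H @ x         # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append((float(x[0, 0]), float(x[1, 0])))
    return estimates
```

In a tracking pipeline like the one described, the predicted position from such a filter would bound the search region, so the detector only compares candidates near the predicted location rather than across the whole frame.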

Original language: English
Pages (from-to): 2353-2361
Number of pages: 9
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 19
Issue number: 7
DOIs
State: Published - Jul 2018

Keywords

  • Faster R-CNN
  • Kalman filter
  • multimodal data fusion
  • surveillance
  • vehicle tracking

