Learning Multi-view Generator Network for Shared Representation

Tian Han, Xianglei Xing, Ying Nian Wu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Multi-view representation learning is challenging because different views contain both a common structure and complex view-specific information. Traditional generative models may not be effective in such situations, since view-specific and common information cannot be well separated, which may cause problems for downstream vision tasks. In this paper, we introduce a multi-view generator model to solve the problems of multi-view generation and recognition in a unified framework. We propose a multi-view alternating back-propagation algorithm to learn multi-view generator networks by allowing them to share common latent factors. Our experiments show that the proposed method is effective for both image generation and recognition. Specifically, we first qualitatively demonstrate that our model can rotate and complete faces accurately. We then show, through quantitative comparisons, that our model achieves state-of-the-art or competitive recognition performance.
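
The abstract describes per-view generator networks that share common latent factors, trained with an alternating back-propagation scheme that alternates inference of the latent factors (e.g., via Langevin dynamics) with gradient updates of the generator parameters. Below is a minimal sketch of that idea, not the authors' implementation: the network sizes, step sizes, Gaussian priors and noise model, and the random toy data are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of multi-view alternating
# back-propagation with a shared latent code. Dimensions, step sizes,
# Gaussian priors/noise, and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

D_SHARED, D_VIEW, D_OBS = 16, 8, 64   # assumed latent/observation sizes
NUM_VIEWS, SIGMA = 2, 0.3             # number of views and noise level (assumed)

def make_generator():
    # one small generator per view; input = [shared z_c, view-specific z_k]
    return nn.Sequential(
        nn.Linear(D_SHARED + D_VIEW, 128), nn.ReLU(),
        nn.Linear(128, D_OBS),
    )

generators = [make_generator() for _ in range(NUM_VIEWS)]
params = [p for g in generators for p in g.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def log_joint(x_views, z_c, z_views):
    # log p(z_c) + sum_k [ log p(z_k) + log p(x_k | z_c, z_k) ], up to constants
    lp = -0.5 * (z_c ** 2).sum()
    for g, x, z_k in zip(generators, x_views, z_views):
        lp = lp - 0.5 * (z_k ** 2).sum()
        recon = g(torch.cat([z_c, z_k], dim=-1))
        lp = lp - 0.5 * ((x - recon) ** 2).sum() / SIGMA ** 2
    return lp

def langevin_infer(x_views, z_c, z_views, step=0.05, n_steps=20):
    # inference step: Langevin dynamics on shared and view-specific latents
    for _ in range(n_steps):
        z_c = z_c.detach().requires_grad_(True)
        z_views = [z.detach().requires_grad_(True) for z in z_views]
        lp = log_joint(x_views, z_c, z_views)
        grads = torch.autograd.grad(lp, [z_c] + z_views)
        with torch.no_grad():
            z_c = z_c + 0.5 * step ** 2 * grads[0] + step * torch.randn_like(z_c)
            z_views = [z + 0.5 * step ** 2 * gz + step * torch.randn_like(z)
                       for z, gz in zip(z_views, grads[1:])]
    return z_c.detach(), [z.detach() for z in z_views]

# toy multi-view batch (random placeholders standing in for, e.g., face views)
x_views = [torch.randn(32, D_OBS) for _ in range(NUM_VIEWS)]
z_c = torch.randn(32, D_SHARED)
z_views = [torch.randn(32, D_VIEW) for _ in range(NUM_VIEWS)]

for it in range(100):
    # alternate: infer latents given generators, then update generators
    z_c, z_views = langevin_infer(x_views, z_c, z_views)
    opt.zero_grad()
    loss = -log_joint(x_views, z_c, z_views)
    loss.backward()
    opt.step()
```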

Original language: English
Title of host publication: 2018 24th International Conference on Pattern Recognition, ICPR 2018
Pages: 2062-2068
Number of pages: 7
ISBN (Electronic): 9781538637883
DOIs
State: Published - 26 Nov 2018
Event: 24th International Conference on Pattern Recognition, ICPR 2018 - Beijing, China
Duration: 20 Aug 2018 - 24 Aug 2018

Publication series

Name: Proceedings - International Conference on Pattern Recognition
Volume: 2018-August
ISSN (Print): 1051-4651

Conference

Conference: 24th International Conference on Pattern Recognition, ICPR 2018
Country/Territory: China
City: Beijing
Period: 20/08/18 - 24/08/18

Keywords

  • Gait recognition
  • Generator networks
  • Multi-view learning
