Deformable Generator Networks: Unsupervised Disentanglement of Appearance and Geometry

Xianglei Xing, Ruiqi Gao, Tian Han, Song Chun Zhu, Ying Nian Wu

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

We present a deformable generator model that disentangles appearance and geometric information for both image and video data in a purely unsupervised manner. The appearance generator network models appearance-related information, including color, illumination, identity, and category, while the geometric generator performs geometric warping, such as rotation and stretching, by generating a deformation field that warps the generated appearance into the final image or video sequence. The two generators take independent latent vectors as input, which disentangles the appearance and geometric information in images or video sequences. For video data, a nonlinear transition model is introduced for both the appearance and geometric generators to capture the dynamics over time. The proposed scheme is general and can easily be integrated into different generative models. An extensive set of qualitative and quantitative experiments shows that appearance and geometric information are well disentangled, and that the learned geometric generator can be conveniently transferred to other image datasets with similar structural regularity to facilitate knowledge transfer tasks.
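To make the architecture described above concrete, the following is a minimal sketch, not the authors' implementation: an appearance generator renders a canonical image from one latent vector, a geometric generator produces a dense 2D displacement field from an independent latent vector, and the final image is obtained by warping the appearance with that field. All network shapes, layer sizes, and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableGenerator(nn.Module):
    """Sketch of a deformable generator: appearance net + geometric (warping) net."""
    def __init__(self, z_app_dim=64, z_geo_dim=64, img_size=64):
        super().__init__()
        self.img_size = img_size
        # Appearance generator: latent vector -> RGB image in a canonical pose.
        self.appearance_net = nn.Sequential(
            nn.Linear(z_app_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size), nn.Tanh(),
        )
        # Geometric generator: latent vector -> per-pixel (dx, dy) displacements.
        self.geometry_net = nn.Sequential(
            nn.Linear(z_geo_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, z_app, z_geo):
        n, s = z_app.size(0), self.img_size
        appearance = self.appearance_net(z_app).view(n, 3, s, s)
        # Small displacements on top of the identity grid (scale is an assumption).
        displacement = 0.1 * self.geometry_net(z_geo).view(n, s, s, 2)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, s), torch.linspace(-1, 1, s), indexing="ij")
        identity = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        grid = identity + displacement
        # Warp the generated appearance with the generated deformation field.
        return F.grid_sample(appearance, grid, align_corners=True)

# Usage: independent latent vectors control appearance and geometry separately.
gen = DeformableGenerator()
images = gen(torch.randn(8, 64), torch.randn(8, 64))  # (8, 3, 64, 64)
```

Because the two latent vectors feed separate networks, holding one fixed while varying the other changes only appearance or only geometry, which is the disentanglement effect the paper studies.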

Original language: English
Pages (from-to): 1162-1179
Number of pages: 18
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 3
State: Published - 1 Mar 2022

Keywords

  • Unsupervised learning
  • deep generative model
  • deformable model
