Learning Compositional Sparse Bimodal Models

Suren Kumar, Vikas Dhiman, Parker A. Koch, Jason J. Corso

Research output: Contribution to journal › Article › peer-review

Abstract

Various perceptual domains have underlying compositional semantics that are rarely captured in current models. We suspect this is because directly learning the compositional structure has evaded these models. Yet, the compositional structure of a given domain can be grounded in a separate domain, thereby simplifying its learning. To that end, we propose a new approach to modeling bimodal perceptual domains that explicitly relates distinct projections across each modality and then jointly learns a bimodal sparse representation. The resulting model enables compositionality across these distinct projections and hence can generalize to unobserved percepts spanned by this compositional basis. For example, our model can be trained on red triangles and blue squares; yet it will also have implicitly learned red squares and blue triangles. The structure of the projections and hence the compositional basis is learned automatically; no assumption is made on the ordering of the compositional elements in either modality. Although our modeling paradigm is general, we explicitly focus on a tabletop building-blocks setting. To test our model, we have acquired a new bimodal dataset comprising images and spoken utterances of colored shapes (blocks) in the tabletop setting. Our experiments demonstrate the benefits of explicitly leveraging compositionality in both quantitative and human evaluation studies.
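To illustrate the general idea of a jointly learned bimodal sparse representation (not the authors' actual model, whose details are given in the paper), the sketch below uses scikit-learn's dictionary learning on stacked feature vectors from two modalities, so that each learned atom has an image part and an audio part tied together by one shared sparse code per sample. All data, dimensions, and parameter values here are hypothetical placeholders chosen only for illustration.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical bimodal data: each sample pairs an image-feature vector with a
# speech-feature vector. Random data stands in for real features.
rng = np.random.default_rng(0)
n_samples, d_image, d_audio = 200, 64, 32
X_image = rng.standard_normal((n_samples, d_image))
X_audio = rng.standard_normal((n_samples, d_audio))

# Joint sparse coding: stack the modalities so every dictionary atom has an
# image part and an audio part, and both modalities share one sparse code.
X_joint = np.hstack([X_image, X_audio])

dico = DictionaryLearning(
    n_components=16,               # size of the shared basis (assumed value)
    transform_algorithm="lasso_lars",
    transform_alpha=0.1,           # sparsity of the shared codes (assumed value)
    random_state=0,
)
codes = dico.fit_transform(X_joint)      # shared sparse codes, shape (200, 16)
D_image = dico.components_[:, :d_image]  # image part of each atom
D_audio = dico.components_[:, d_image:]  # audio part of each atom
```

Because the two halves of each atom are coupled through the same sparse code, a code inferred from one modality (e.g., the spoken description of a "red square") can be decoded into the other modality even if that exact combination never appeared during training; this is the flavor of cross-modal compositional generalization the abstract describes, realized here only as a simplified sketch.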

Original language: English
Pages (from-to): 1032-1044
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 40
Issue number: 5
DOIs
State: Published - 1 May 2018

Keywords

  • Multimodal learning
  • artificial intelligence
  • compositional learning
  • human-robot interaction
  • symbol grounding
  • tabletop robotics
