Matching-space Stereo Networks for Cross-domain Generalization

Changjiang Cai, Matteo Poggi, Stefano Mattoccia, Philippos Mordohai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

34 Scopus citations

Abstract

End-to-end deep networks represent the state of the art for stereo matching. While excelling on images framing environments similar to the training set, major drops in accuracy occur in unseen domains (e.g., when moving from synthetic to real scenes). In this paper we introduce a novel family of architectures, namely Matching-Space Networks (MS-Nets), with improved generalization properties. By replacing learning-based feature extraction from image RGB values with matching functions and confidence measures from conventional wisdom, we move the learning process from the color space to the Matching Space, avoiding over-specialization to domain-specific features. Extensive experimental results on four real datasets highlight that our proposal leads to superior generalization to unseen environments over conventional deep architectures, keeping accuracy on the source domain almost unaltered. Our code is available at https://github.com/ccj5351/MS-Nets.
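The shift from the color space to the Matching Space described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it builds a cost volume from one conventional matching function (zero-mean normalized cross-correlation), whereas MS-Nets combine several hand-crafted matching functions and confidence measures as network input. All function and variable names here are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): replace learned RGB
# features with a conventional matching cost as the network's input space.
import numpy as np

def zncc_cost_volume(left, right, max_disp, win=5):
    """Cost volume of shape (max_disp, H, W) from grayscale images in [0, 1]."""
    H, W = left.shape
    pad = win // 2
    # Default cost for pixels where the disparity shift falls out of range.
    volume = np.ones((max_disp, H, W), dtype=np.float32)

    def patches(img):
        padded = np.pad(img, pad, mode="edge")
        # Gather all win x win neighborhoods: shape (H, W, win*win).
        out = np.stack(
            [padded[i:i + H, j:j + W] for i in range(win) for j in range(win)],
            axis=-1,
        )
        return out - out.mean(axis=-1, keepdims=True)  # zero-mean per patch

    lp, rp = patches(left), patches(right)
    l_norm = np.linalg.norm(lp, axis=-1) + 1e-6
    r_norm = np.linalg.norm(rp, axis=-1) + 1e-6

    for d in range(max_disp):
        # Shift right-image patches by disparity d and correlate with the left.
        corr = (lp[:, d:, :] * rp[:, : W - d, :]).sum(-1)
        zncc = corr / (l_norm[:, d:] * r_norm[:, : W - d])
        volume[d, :, d:] = 1.0 - zncc  # turn similarity into a cost
    return volume

# Usage: such a volume (together with confidence measures) would be fed to a
# regularization network instead of features learned from RGB values.
left = np.random.rand(64, 96).astype(np.float32)
right = np.random.rand(64, 96).astype(np.float32)
costs = zncc_cost_volume(left, right, max_disp=16)
print(costs.shape)  # (16, 64, 96)
```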

Original language: English
Title of host publication: Proceedings - 2020 International Conference on 3D Vision, 3DV 2020
Pages: 364-373
Number of pages: 10
ISBN (Electronic): 9781728181288
DOIs
State: Published - Nov 2020
Event: 8th International Conference on 3D Vision, 3DV 2020 - Virtual, Fukuoka, Japan
Duration: 25 Nov 2020 - 28 Nov 2020

Publication series

Name: Proceedings - 2020 International Conference on 3D Vision, 3DV 2020

Conference

Conference: 8th International Conference on 3D Vision, 3DV 2020
Country/Territory: Japan
City: Virtual, Fukuoka
Period: 25/11/20 - 28/11/20
