Click Here: Human-Localized Keypoints as Guidance for Viewpoint Estimation

Ryan Szeto, Jason J. Corso

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

We motivate and address a human-in-the-loop variant of the monocular viewpoint estimation task in which the location and class of one semantic object keypoint is available at test time. In order to leverage the keypoint information, we devise a Convolutional Neural Network called Click-Here CNN (CH-CNN) that integrates the keypoint information with activations from the layers that process the image. It transforms the keypoint information into a 2D map that can be used to weigh features from certain parts of the image more heavily. The weighted sum of these spatial features is combined with global image features to provide relevant information to the prediction layers. To train our network, we collect a novel dataset of 3D keypoint annotations on thousands of CAD models, and synthetically render millions of images with 2D keypoint information. On test instances from PASCAL 3D+, our model achieves a mean class accuracy of 90.7%, whereas the state-of-the-art baseline only obtains 85.7% mean class accuracy, justifying our argument for human-in-the-loop inference.
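The abstract describes an attention-style mechanism: the clicked keypoint (a 2D location map plus a class label) produces per-location weights over convolutional activations, and the weighted spatial sum is concatenated with global image features before the viewpoint prediction layers. The sketch below illustrates that idea only; the layer names, dimensions, and the single prediction head are illustrative assumptions and do not reproduce the authors' exact CH-CNN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointGuidedViewpointHead(nn.Module):
    """Minimal sketch of keypoint-guided spatial attention, in the spirit of CH-CNN.

    Assumed (hypothetical) dimensions: 13x13 conv feature map, 34 keypoint classes,
    a 4096-dim global image feature, and a single 360-bin angle head.
    """
    def __init__(self, conv_channels=384, map_size=13, num_kp_classes=34,
                 global_dim=4096, num_angle_bins=360):
        super().__init__()
        # Embed the keypoint location map and keypoint class, then fuse them
        # into one attention logit per spatial cell of the conv feature map.
        self.loc_fc = nn.Linear(map_size * map_size, 384)
        self.cls_fc = nn.Linear(num_kp_classes, 34)
        self.attn_fc = nn.Linear(384 + 34, map_size * map_size)
        # Predict one angle from the fused features; a full model would use
        # separate heads for azimuth, elevation, and tilt.
        self.pred = nn.Linear(conv_channels + global_dim, num_angle_bins)

    def forward(self, conv_feats, global_feats, kp_map, kp_class):
        # conv_feats:   (B, C, H, W) spatial activations from an image CNN
        # global_feats: (B, global_dim) global image features (e.g. a late FC layer)
        # kp_map:       (B, H, W) 2D map encoding the clicked keypoint location
        # kp_class:     (B, num_kp_classes) one-hot keypoint class
        B, C, H, W = conv_feats.shape
        loc = F.relu(self.loc_fc(kp_map.flatten(1)))
        cls = F.relu(self.cls_fc(kp_class))
        # Softmax-normalized attention weights over the H*W spatial cells.
        attn = F.softmax(self.attn_fc(torch.cat([loc, cls], dim=1)), dim=1)
        attn = attn.view(B, 1, H, W)
        # Weighted sum of spatial features, then concatenation with global features.
        attended = (conv_feats * attn).sum(dim=(2, 3))       # (B, C)
        fused = torch.cat([attended, global_feats], dim=1)   # (B, C + global_dim)
        return self.pred(fused)                              # angle-bin logits
```

The key design point carried over from the abstract is that the keypoint does not feed the predictor directly; it only re-weights where in the image the spatial features are pooled from, and the result is combined with a global image representation before classification over angle bins.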

Original language: English
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
Pages: 1604-1613
Number of pages: 10
ISBN (Electronic): 9781538610329
State: Published - 22 Dec 2017
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy
Duration: 22 Oct 2017 - 29 Oct 2017

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Volume: 2017-October
ISSN (Print): 1550-5499

Conference

Conference: 16th IEEE International Conference on Computer Vision, ICCV 2017
Country/Territory: Italy
City: Venice
Period: 22/10/17 - 29/10/17
