Discriminative hand localization in depth images

Max Ehrlich, Philippos Mordohai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We present a novel hand localization technique for 3D user interfaces. Our method is designed to overcome the difficulty of fitting anatomical models, which fail to converge or converge with large errors in complex scenes or suboptimal imagery. We learn a discriminative model of the hand from depth images using fast-to-compute features and a Random Forest classifier. The learned model is then combined with a spatial clustering algorithm to localize the hand position. We propose three formulations of low-level image features for use in model training. We evaluate the performance of our method on low-resolution depth maps of users two to three meters from the sensor in natural poses. Our method can detect an arbitrary number of hands per scene, and preliminary results show that it is robust to suboptimal imagery.
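The abstract does not specify the exact feature formulations or the clustering algorithm. The sketch below (Python with scikit-learn) is a minimal illustration of the general pipeline only, under assumed stand-ins: depth-difference probe features in place of the paper's "fast to compute features", DBSCAN in place of its spatial clustering step, and a synthetic depth map with hand-shaped blobs in place of real annotated training data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import DBSCAN

def depth_difference_features(depth, pixels, offsets):
    """For each pixel, return depth differences between pairs of offset probes.

    depth   : (H, W) depth map.
    pixels  : (N, 2) integer (row, col) coordinates.
    offsets : (F, 2, 2) array giving two 2D probe offsets per feature.
    """
    H, W = depth.shape
    feats = np.empty((len(pixels), len(offsets)), dtype=np.float32)
    for f, (du, dv) in enumerate(offsets):
        pu = np.clip(pixels + du, [0, 0], [H - 1, W - 1]).astype(int)
        pv = np.clip(pixels + dv, [0, 0], [H - 1, W - 1]).astype(int)
        feats[:, f] = depth[pu[:, 0], pu[:, 1]] - depth[pv[:, 0], pv[:, 1]]
    return feats

# Toy end-to-end run on a synthetic depth map (placeholder for real annotated data).
rng = np.random.default_rng(0)
depth = rng.normal(2500.0, 20.0, size=(120, 160)).astype(np.float32)  # background ~2.5 m away
depth[40:55, 30:45] = 1800.0    # closer blob standing in for one "hand"
depth[70:85, 110:125] = 1800.0  # a second "hand"
labels_img = np.zeros(depth.shape, dtype=int)
labels_img[40:55, 30:45] = 1
labels_img[70:85, 110:125] = 1

# Sample pixels on a grid and train a per-pixel hand / not-hand classifier.
rows, cols = np.mgrid[0:depth.shape[0]:2, 0:depth.shape[1]:2]
pixels = np.stack([rows.ravel(), cols.ravel()], axis=1)
offsets = rng.integers(-15, 16, size=(32, 2, 2))
X = depth_difference_features(depth, pixels, offsets)
y = labels_img[pixels[:, 0], pixels[:, 1]]
clf = RandomForestClassifier(n_estimators=20, max_depth=10, random_state=0).fit(X, y)

# Localization: classify pixels, then spatially cluster positive responses so that
# each cluster yields one detection (an arbitrary number of hands per scene).
hand_pixels = pixels[clf.predict(X) == 1]
cluster_ids = DBSCAN(eps=6, min_samples=5).fit_predict(hand_pixels)
centers = [hand_pixels[cluster_ids == k].mean(axis=0) for k in set(cluster_ids) if k != -1]
print("Detected hand centers (row, col):", centers)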

Original language: English
Title of host publication: 2016 IEEE Symposium on 3D User Interfaces, 3DUI 2016 - Proceedings
Editors: Rob Lindeman, Bruce H. Thomas, Maud Marchal
Pages: 239-240
Number of pages: 2
ISBN (Electronic): 9781509008421
DOIs
State: Published - 26 Apr 2016
Event: 11th IEEE Symposium on 3D User Interfaces, 3DUI 2016 - Greenville, United States
Duration: 19 Mar 2016 - 20 Mar 2016

Publication series

Name: 2016 IEEE Symposium on 3D User Interfaces, 3DUI 2016 - Proceedings

Conference

Conference: 11th IEEE Symposium on 3D User Interfaces, 3DUI 2016
Country/Territory: United States
City: Greenville
Period: 19/03/16 - 20/03/16

Keywords

  • I.4.7 [Image Processing and Computer Vision]: Feature Measurement - Feature representation
  • I.4.8 [Image Processing and Computer Vision]: Scene Analysis - Depth Cues
