TY - GEN
T1 - P-HRTF: Efficient personalized HRTF computation for high-fidelity spatial sound
T2 - 13th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2014
AU - Meshram, Alok
AU - Mehra, Ravish
AU - Yang, Hongsheng
AU - Dunn, Enrique
AU - Frahm, Jan-Michael
AU - Manocha, Dinesh
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/11/5
Y1 - 2014/11/5
N2 - Accurate rendering of 3D spatial audio for interactive virtual auditory displays requires the use of personalized head-related transfer functions (HRTFs). We present a new approach to compute personalized HRTFs for any individual by combining state-of-the-art image-based 3D modeling with an efficient numerical simulation pipeline. Our 3D modeling framework enables capture of the listener's head and torso using consumer-grade digital cameras to estimate a high-resolution non-parametric surface representation of the head, including the extended vicinity of the listener's ear. We leverage sparse structure-from-motion and dense surface reconstruction techniques to generate a 3D mesh. This mesh is used as input to a numerical sound propagation solver, which uses acoustic reciprocity and the Kirchhoff surface integral representation to efficiently compute an individual's personalized HRTF. The overall computation takes tens of minutes on a multi-core desktop machine. We have used our approach to compute the personalized HRTFs of a few individuals, and we present our preliminary evaluation here. To the best of our knowledge, this is the first commodity technique that can be used to compute personalized HRTFs in a lab or home setting.
AB - Accurate rendering of 3D spatial audio for interactive virtual auditory displays requires the use of personalized head-related transfer functions (HRTFs). We present a new approach to compute personalized HRTFs for any individual by combining state-of-the-art image-based 3D modeling with an efficient numerical simulation pipeline. Our 3D modeling framework enables capture of the listener's head and torso using consumer-grade digital cameras to estimate a high-resolution non-parametric surface representation of the head, including the extended vicinity of the listener's ear. We leverage sparse structure-from-motion and dense surface reconstruction techniques to generate a 3D mesh. This mesh is used as input to a numerical sound propagation solver, which uses acoustic reciprocity and the Kirchhoff surface integral representation to efficiently compute an individual's personalized HRTF. The overall computation takes tens of minutes on a multi-core desktop machine. We have used our approach to compute the personalized HRTFs of a few individuals, and we present our preliminary evaluation here. To the best of our knowledge, this is the first commodity technique that can be used to compute personalized HRTFs in a lab or home setting.
UR - http://www.scopus.com/inward/record.url?scp=84945117975&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84945117975&partnerID=8YFLogxK
U2 - 10.1109/ISMAR.2014.6948409
DO - 10.1109/ISMAR.2014.6948409
M3 - Conference contribution
AN - SCOPUS:84945117975
T3 - ISMAR 2014 - IEEE International Symposium on Mixed and Augmented Reality - Science and Technology 2014, Proceedings
SP - 53
EP - 61
BT - ISMAR 2014 - IEEE International Symposium on Mixed and Augmented Reality - Science and Technology 2014, Proceedings
A2 - Lindeman, Robert W.
A2 - Sandor, Christian
A2 - Julier, Simon
Y2 - 10 September 2014 through 12 September 2014
ER -