Saturday, October 1, 4:00 pm — 5:30 pm (Rm 409A)
XY-Stereo Capture and Up-Conversion for Virtual Reality—Nicolas Tsingos, Dolby Laboratories - San Francisco, CA, USA; Cong Zhou, Dolby Laboratories - San Francisco, CA, USA; Abhay Nadkarni, Dolby Laboratories - San Francisco, CA, USA
We propose a perceptually based approach to creating immersive soundscapes for VR applications. We leverage stereophonic content obtained from XY microphones as a basic building block that can be easily recorded, edited, and combined to provide a more compelling experience than a recording made at a single location. Central to our approach is a novel up-conversion algorithm that derives a nearly full-spherical parametric soundfield, including height information, from an XY recording. This approach enables simpler, improved capture compared with alternative soundfield recording techniques, and it can take advantage of new object-based formats for flexible delivery and playback.
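The abstract does not disclose the up-conversion algorithm itself. Purely as a hedged illustration of the general idea of deriving parametric directions from an XY pair, the Python sketch below estimates a per-bin horizontal arrival angle from the level ratio of two coincident cardioids aimed at ±45° and re-pans a mono downmix to four horizontal channels; the capsule pattern, the panner, and all function names are assumptions, and the height derivation central to the paper is not reproduced here.

    import numpy as np

    # Coincident XY pair: two first-order cardioids aimed at +/-45 degrees.
    MIC_ANGLES = np.radians([45.0, -45.0])

    def cardioid_gain(az, aim):
        # First-order cardioid pickup: 0.5 + 0.5*cos(angle off axis).
        return 0.5 + 0.5 * np.cos(az - aim)

    # Lookup from azimuth to the L/R level difference (dB) it produces.
    _az_grid = np.linspace(-np.pi / 2, np.pi / 2, 721)
    _lr_db = 20 * np.log10(cardioid_gain(_az_grid, MIC_ANGLES[0]) /
                           cardioid_gain(_az_grid, MIC_ANGLES[1]))

    def estimate_azimuth(L, R, eps=1e-12):
        # Invert the monotonic pattern-ratio curve by table lookup.
        lr = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))
        return np.interp(lr, _lr_db, _az_grid)

    QUAD_AZ = np.radians([45.0, -45.0, 135.0, -135.0])  # FL, FR, RL, RR

    def pan_to_quad(az):
        # Cosine-weighted, power-normalized gains toward the four speakers.
        g = np.maximum(np.cos(az[..., None] - QUAD_AZ), 0.0)
        return g / np.linalg.norm(g, axis=-1, keepdims=True)

    def upmix_frame(left, right, fft_size=1024):
        # Up-mix one stereo frame to 4 channels, one direction per FFT bin.
        L, R = np.fft.rfft(left, fft_size), np.fft.rfft(right, fft_size)
        gains = pan_to_quad(estimate_azimuth(L, R))      # (bins, 4)
        mono = 0.5 * (L + R)                             # carrier downmix
        return np.fft.irfft(gains.T * mono, fft_size)    # (4, fft_size)

A real up-converter would smooth the per-bin estimates over time and frequency and use overlap-add framing; those steps are omitted for brevity.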
This session is part of the co-located AVAR Conference, which is not included in the normal convention All Access badge.
Augmented Reality Headphone Environment Rendering—Jean-Marc Jot, DTS, Inc. - Los Gatos, CA, USA; Keun Sup Lee, Apple Inc. - Cupertino, CA, USA
In headphone-based augmented reality audio applications, computer-generated audio-visual objects are rendered over headphones or earbuds and blended into a natural audio environment. This requires binaural artificial reverberation processing that matches the local environment acoustics, so that synthetic audio objects are not distinguishable from sounds occurring naturally or reproduced over loudspeakers. Solutions involving the measurement or calculation of binaural room impulse responses in a consumer environment are limited by practical obstacles and complexity. We propose an approach that exploits a statistical reverberation model, enabling practical acoustical environment characterization and computationally efficient reflection and reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based “audio-augmented reality,” enabling natural-sounding, externalized virtual 3-D audio reproduction of music, movie, or game soundtracks.
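The abstract leaves the statistical model unspecified. As a minimal sketch of one common statistical treatment (not the authors' method), the Python below characterizes the environment by a single RT60 figure, synthesizes a binaural late tail as exponentially decaying noise, and shares one convolution across all virtual sources; every name here, as well as the dual-mono direct path, is an illustrative assumption.

    import numpy as np

    def synth_reverb_tail(rt60, fs=48000, seed=0):
        # Statistical late-reverb model: Gaussian noise per ear shaped by
        # the exponential envelope a single RT60 figure implies.
        n = int(fs * 1.5 * rt60)
        t = np.arange(n) / fs
        env = np.exp(-6.91 * t / rt60)            # -60 dB at t = rt60
        rng = np.random.default_rng(seed)
        tail = rng.standard_normal((2, n)) * env  # independent noise per ear
        return tail / np.abs(tail).max()          # decorrelates the tail

    def render_sources(dry_sources, tail, wet_gain=0.3):
        # Efficiency for many sources: sum them to one mono reverb send
        # and convolve once per ear instead of once per source.
        send = dry_sources.sum(axis=0)            # (n_sources, n) -> (n,)
        wet = np.stack([np.convolve(send, tail[ch]) for ch in range(2)])
        out = np.zeros_like(wet)
        out[:, :send.shape[-1]] = send            # placeholder dual-mono
        return out + wet_gain * wet               # direct path (no HRTFs)

A deployed renderer would estimate RT60 (and ideally frequency-dependent decay and reverberation level) from microphone observations of the local room and render each direct path through HRTFs; only the tail-synthesis idea is shown.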
This session is part of the co-located AVAR Conference, which is not included in the normal convention All Access badge.
Capturing and Rendering 360° VR Audio Using Cardioid Microphones—Hyunkook Lee, University of Huddersfield - Huddersfield, UK
This paper proposes a new microphone technique and a binaural rendering approach for 360° VR audio. Four cardioid microphones are arranged in a horizontal square, with 30 cm spacing and a 90° subtended angle for each of the four pairs of adjacent microphones, in order to obtain a stereophonic recording angle (SRA) of 90° for quadraphonic loudspeaker reproduction. The signals are binaurally synthesized with quadraphonic head-related impulse responses. This preserves the same SRA for each of the four 90° segments whenever the listener rotates his or her head by 90° in a head-tracked VR environment, which a listening test confirmed. For vertical sound capture, an upward-facing and an optional downward-facing cardioid microphone are added.
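As a hedged sketch of the head-tracked rendering stage (not code from the paper), the Python below convolves the four capsule signals with left/right HRIRs for virtual loudspeakers at ±45° and ±135° and implements a 90° head turn as a rotation of the capsule-to-loudspeaker mapping, which is what keeps the SRA of each quadrant intact; the clockwise channel ordering and the yaw sign convention are assumptions.

    import numpy as np

    def binauralize(mic_signals, hrirs, head_quadrant=0):
        # mic_signals: (4, n) capsule signals in clockwise order.
        # hrirs: (4, 2, m) left/right HRIRs for virtual loudspeakers at
        #        +/-45 and +/-135 degrees, in the same clockwise order.
        # head_quadrant: head yaw in 90-degree steps (sign convention is
        #        an assumption); rolling the mapping re-aligns the array
        #        with the head so each quadrant keeps its 90-degree SRA.
        rotated = np.roll(mic_signals, -head_quadrant, axis=0)
        n, m = mic_signals.shape[1], hrirs.shape[2]
        out = np.zeros((2, n + m - 1))
        for spk in range(4):
            for ear in range(2):
                out[ear] += np.convolve(rotated[spk], hrirs[spk, ear])
        return out

In practice the HRIRs would come from a measured set (e.g., a dummy-head measurement) and the yaw quadrant would be quantized from a continuous head-tracker reading; the optional height channels would each receive their own HRIR pair in the same way.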
This session is part of the co-located AVAR Conference, which is not included in the normal convention All Access badge.