Friday, September 30, 3:15 pm — 5:15 pm (Rm 409A)
Efficient, Compelling, and Immersive VR Audio Experience Using Scene Based Audio/Higher Order Ambisonics—Shankar Shivappa, Qualcomm Technologies Inc. - San Diego, CA, USA; Martin Morrell, Qualcomm Technologies Inc. - San Diego, CA, USA; Deep Sen, Qualcomm Technologies Inc. - San Diego, CA, USA; Nils Peters, Qualcomm, Advanced Tech R&D - San Diego, CA, USA; S. M. Akramus Salehin, Qualcomm Technologies Inc. - San Diego, CA, USA
Scene-based audio (SBA), also known as Higher Order Ambisonics (HOA), combines the advantages of object-based and traditional channel-based audio schemes and is particularly well suited to enabling a truly immersive (360°/180°) VR audio experience. SBA signals can be efficiently rotated and binauralized, which makes realistic VR audio practical on consumer devices. SBA also provides convenient mechanisms for acquiring live soundfields for VR. MPEG-H is a newly adopted compression standard that can efficiently compress HOA for transmission and storage; it is the only known standard that provides compressed HOA end-to-end. Our paper describes a practical end-to-end chain for SBA/HOA-based VR audio. Given its advantages over other formats, SBA should be “the format of choice” for a compelling VR audio experience.
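To illustrate why SBA rotation is so cheap (a minimal sketch under assumed conventions, not code from the paper): a yaw rotation of a first-order ambisonics block is just a 2 × 2 mix of the X and Y channels, independent of how many sources the scene contains. The ACN channel ordering, SN3D normalization, and the function name below are assumptions of the sketch.

```python
import numpy as np

def rotate_foa_yaw(foa, yaw_rad):
    """Rotate a first-order ambisonics block (shape (4, n_samples),
    assumed ACN order [W, Y, Z, X]) about the vertical axis.

    W (omnidirectional) and Z (vertical) are unaffected by yaw; only
    X and Y mix.  Higher orders rotate analogously with small
    per-order matrices, which is why head-tracked SBA playback needs
    no per-source processing.
    """
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    w, y, z, x = foa
    x_rot = c * x - s * y  # rotate the whole scene by yaw_rad
    y_rot = s * x + c * y
    return np.stack([w, y_rot, z, x_rot])

# Head tracking: to keep sources world-stable while the listener's head
# turns by head_yaw, rotate the scene by the opposite angle:
#   rotated = rotate_foa_yaw(foa_block, -head_yaw)
```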
Soundfield Navigation using an Array of Higher-Order Ambisonics Microphones—Joseph G. Tylka, Princeton University - Princeton, NJ, USA; Edgar Choueiri, Princeton University - Princeton, NJ, USA
A method is presented for soundfield navigation through estimation of the spherical harmonic coefficients (i.e., the higher-order ambisonics signals) of a soundfield at a position within an array of two or more ambisonics microphones. An existing method based on blind source separation is known to suffer from audible artifacts, while an alternative method, in which a weighted average of the ambisonics signals from each microphone is computed, is shown to necessarily introduce comb-filtering and degrade localization for off-center sources. The proposed method entails computing a regularized least-squares estimate of the soundfield at the listening position using the signals from the nearest microphones, excluding those that are nearer to a source than to the listening position. Simulated frequency responses and predicted localization errors suggest that, for interpolation between a pair of microphones, the proposed method achieves both accurate localization and minimal spectral coloration when the product of angular wavenumber and microphone spacing is less than twice the input expansion order. It is also demonstrated that failure to exclude from the calculation those microphones that are nearer to a source than to the listening position can significantly degrade localization accuracy.
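As a rough sketch of the two quantitative ingredients above (assumed names and constants, not the authors' code): the validity rule kd < 2N bounds the usable frequency for a given microphone spacing, and the estimator is a regularized least-squares solve. The matrix A that maps listener-position HOA coefficients to microphone signals is the substance of the paper and is not reconstructed here.

```python
import numpy as np

C = 343.0  # assumed speed of sound, m/s

def max_interp_frequency(order_n, spacing_m, c=C):
    """Upper frequency (Hz) for accurate interpolation between a pair of
    HOA microphones, per the abstract's condition kd < 2N:
    (2*pi*f/c) * d < 2*N  =>  f < N*c / (pi*d)."""
    return order_n * c / (np.pi * spacing_m)

def regularized_lsq(A, b, lam=1e-3):
    """Tikhonov-regularized least squares,
    x = argmin ||A x - b||^2 + lam * ||x||^2,
    the generic form of the regularized estimate described above."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

# Example: two 4th-order microphones spaced 1 m apart give
# max_interp_frequency(4, 1.0) -> ~437 Hz as the rule-of-thumb limit.
```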
Immersive Audio Rendering for Interactive Complex Virtual Architectural Environments—Imran Muhammad, Hanyang University - Seoul, Korea; Jin Yong Jeon, Hanyang University - Seoul, Korea; Acoustics Authorized - Seoul, Korea
In this study we investigate sound propagation methods for complex virtual architectural environments, for spatialized audio rendering in immersive virtual reality (VR) scenarios. Over the last few decades, sound propagation models for complex building structures have been designed and investigated using geometrical acoustics (GA) and hybrid techniques. Sound propagation for VR requires fast simulation tools that can incorporate a sufficient number of dynamically moving sound sources, room acoustical properties, and reflections and diffraction from interactively changing surface elements. Using physically based models, we achieve a reasonable trade-off between sound quality and system performance. Furthermore, we describe the sound rendering pipeline that auralizes the virtual scene.
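As an illustration of the GA building block the abstract relies on (a sketch with assumed names and a single broadband absorption coefficient, not the authors' pipeline): a first-order image source mirrors the source across a reflecting plane and turns the resulting path length into a delay and a gain.

```python
import numpy as np

C = 343.0  # assumed speed of sound, m/s

def first_order_image_source(src, listener, plane_point, plane_normal,
                             absorption=0.1):
    """Compute (delay_s, gain) for one specular reflection off a planar
    surface by mirroring the source across the plane: the core
    geometrical-acoustics operation.  Real pipelines apply this
    recursively for higher-order reflections and add diffraction."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    src = np.asarray(src, dtype=float)
    dist_to_plane = np.dot(src - np.asarray(plane_point, dtype=float), n)
    image = src - 2.0 * dist_to_plane * n           # mirrored source
    r = np.linalg.norm(np.asarray(listener, dtype=float) - image)
    delay = r / C                                   # propagation delay
    gain = (1.0 - absorption) / max(r, 1e-6)        # 1/r spreading x loss
    return delay, gain

# Floor reflection (plane z = 0) for a source at 1.5 m height:
# delay, gain = first_order_image_source((1, 1, 1.5), (4, 3, 1.5),
#                                        (0, 0, 0), (0, 0, 1))
```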
Immersive Audio for VR—Joel Susal, Dolby Laboratories - San Francisco, CA, USA; Kurt Krauss, Dolby Germany GmbH - Nuremberg, Germany; Nicolas Tsingos, Dolby Labs - San Francisco, CA, USA; Marcus Altman, Dolby Laboratories - San Francisco, CA, USA
Object-based creation, packaging, and playback of sound content is now prevalent in cinema and home theater, delivering immersive audio experiences. This has paved the way for virtual reality sound, where precise sound placement is necessary for complete immersion in a virtual world.
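A minimal sketch of the object-based model (audio plus positional metadata, rendered to the actual playback system at the last moment); the class, the toy stereo constant-power panner, and all names here are illustrative assumptions, not Dolby's API. A VR renderer would instead binauralize each object with head-tracked HRTFs, but the pipeline shape is the same.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class AudioObject:
    signal: np.ndarray    # mono audio samples
    azimuth_rad: float    # position metadata: -pi/2 (right) .. +pi/2 (left)

def render_stereo(objects):
    """Render objects to a stereo bus with constant-power panning,
    standing in for a real (e.g., binaural) object renderer."""
    n = max(len(o.signal) for o in objects)
    out = np.zeros((2, n))
    for o in objects:
        theta = (o.azimuth_rad + np.pi / 2) / 2.0   # map to 0..pi/2
        out[0, :len(o.signal)] += np.sin(theta) * o.signal  # left gain
        out[1, :len(o.signal)] += np.cos(theta) * o.signal  # right gain
    return out
```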
This session is part of the co-located AVAR Conference, which is not included in the normal convention All Access badge.