AES New York 2017
Spatial Audio Track Event SA06
Thursday, October 19, 3:00 pm — 4:00 pm (Rm 1E09)
Spatial Audio: SA06 - Perceptual Thresholds of Spatial Audio Latency for Dynamic Virtual Auditory Environments
Presenter: Ravish Mehra, Oculus Research - Redmond, WA, USA
Generating acoustic signals over headphones that reproduce the properties of natural environments remains a significant technical challenge. One hurdle is the time it takes to update the signal each time the listener moves. The end-to-end spatial audio latency (SAL) is the time elapsed between the listener assuming a new position and the updated sound being delivered to their ears. It comprises latencies in head tracking, HRTF interpolation and filtering, operating-system callbacks, audio driver and hardware (D/A conversion) buffering, and other parts of the signal-processing chain. Because some amount of SAL is currently unavoidable, it is important to determine how much SAL listeners can detect in order to establish latency thresholds for virtual auditory environments.
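As a rough illustration (not taken from the paper), end-to-end SAL can be thought of as the sum of per-stage delays in the rendering pipeline. The component names below follow the stages listed above, but the numeric values are hypothetical placeholders, not measured figures.

```python
# Illustrative back-of-the-envelope SAL budget.
# Stage names follow the abstract; the millisecond values are placeholders only.
latency_budget_ms = {
    "head_tracking": 10.0,               # sensor sampling + pose estimation
    "hrtf_interpolation_filtering": 5.0,  # selecting/blending and applying HRTF filters
    "os_audio_callback": 10.0,           # e.g., one callback period
    "driver_and_hw_buffering": 20.0,     # D/A conversion and output buffers
    "other_dsp": 5.0,                    # remaining signal-processing stages
}

end_to_end_sal_ms = sum(latency_budget_ms.values())
print(f"Estimated end-to-end SAL: {end_to_end_sal_ms:.1f} ms")
```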
We used a two-interval forced-choice paradigm to measure SAL detectability at azimuths of 10 and 60 degrees, both with and without co-located visual stimuli. Overall, mean SAL thresholds were between 128 ms and 158 ms. Consistent with minimum audible movement angle data, thresholds were greater at larger azimuths. A retrospective analysis revealed that listeners who strategically varied the velocity, acceleration, and rate of their head rotations performed the task better. This suggests that SAL thresholds will be lower for applications in which users are expected to move their heads more rapidly and abruptly. Results are discussed in the context of prior research and the potential implications for rendering virtual reality audio.
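The sketch below illustrates one common way a two-interval forced-choice detection threshold can be estimated, using a 1-up/2-down adaptive staircase against a simulated observer; it is not the authors' procedure, and the observer parameters (a nominal 140 ms threshold, arbitrary slope) are invented for illustration only.

```python
import math
import random

def simulated_observer_correct(latency_ms, threshold_ms=140.0, slope=0.05):
    """Toy 2IFC observer: detection probability rises with added latency.
    Parameters are arbitrary placeholders, not values from the study."""
    p_detect = 1.0 / (1.0 + math.exp(-slope * (latency_ms - threshold_ms)))
    p_correct = 0.5 + 0.5 * p_detect  # chance level is 0.5 when latency is undetected
    return random.random() < p_correct

def run_staircase(start_ms=300.0, step_ms=20.0, reversals_needed=8):
    """1-up/2-down staircase: converges near the 70.7%-correct latency."""
    latency = start_ms
    correct_streak = 0
    direction = None
    reversal_latencies = []
    while len(reversal_latencies) < reversals_needed:
        if simulated_observer_correct(latency):
            correct_streak += 1
            if correct_streak == 2:       # two correct in a row -> reduce latency
                correct_streak = 0
                if direction == "up":
                    reversal_latencies.append(latency)
                direction = "down"
                latency = max(0.0, latency - step_ms)
        else:                             # one incorrect -> increase latency
            correct_streak = 0
            if direction == "down":
                reversal_latencies.append(latency)
            direction = "up"
            latency += step_ms
    # Threshold estimate: mean latency at the staircase reversals
    return sum(reversal_latencies) / len(reversal_latencies)

if __name__ == "__main__":
    print(f"Estimated SAL detection threshold: {run_staircase():.1f} ms")
```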