
AES New York 2013
Paper Session P16

P16 - Spatial Audio—Part 2


Sunday, October 20, 9:00 am — 12:00 pm (Room 1E07)

Chair:
Jean-Marc Jot, DTS, Inc. - Los Gatos, CA, USA

P16-1 Defining the Un-Aliased Region for Focused Sources
Robert Oldfield, University of Salford - Salford, Greater Manchester, UK; Ian Drumm, University of Salford - Salford, Greater Manchester, UK
Sound field synthesis reproduction techniques such as wave field synthesis can accurately reproduce wave fronts of arbitrary curvature, including the wave fronts of a source positioned in front of the array (a focused source). The wave fronts are accurate up to the spatial aliasing frequency, above which there are no longer enough secondary sources (loudspeakers) to reproduce the wave front accurately, resulting in spatial aliasing contributions that manifest as additional wave fronts propagating in unintended directions. These contributions cause temporal, spectral, and spatial errors in the reproduced wave front. Focused sources (sources in front of the loudspeaker array) are unique in this sense in that there is a clearly defined region around the virtual source position that exhibits no spatial aliasing contributions even at extremely high frequencies. This paper presents a method for the full characterization of this un-aliased region using both a ray-based propagation model and a time-domain approach.
Convention Paper 9001 (Purchase now)
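
For context on the aliasing limit discussed in the abstract above, here is a minimal sketch (not taken from the paper) of the textbook worst-case estimate of the wave field synthesis spatial aliasing frequency for a linear loudspeaker array; the spacing values are illustrative assumptions.
```python
# Worst-case WFS spatial aliasing estimate: above f_alias the array no longer
# samples the driving function densely enough and aliased wave fronts appear.
C = 343.0  # speed of sound in air, m/s

def wfs_aliasing_frequency(dx_m: float) -> float:
    """Worst-case spatial aliasing frequency (Hz) for loudspeaker spacing dx_m (m)."""
    return C / (2.0 * dx_m)

if __name__ == "__main__":
    for dx in (0.10, 0.15, 0.20):
        print(f"spacing {dx:.2f} m -> f_alias ~ {wfs_aliasing_frequency(dx):.0f} Hz")
```
The paper's contribution is the characterization of the region around a focused source where these aliased contributions do not appear; the estimate above only bounds where aliasing becomes possible at all.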

P16-2 Using Ambisonics to Reconstruct Measured Soundfields
Samuel W. Clapp, Rensselaer Polytechnic Institute - Troy, NY, USA; Anne E. Guthrie, Rensselaer Polytechnic Institute - Troy, NY, USA; Arup Acoustics - New York, NY, USA; Jonas Braasch, Rensselaer Polytechnic Institute - Troy, NY, USA; Ning Xiang, Rensselaer Polytechnic Institute - Troy, NY, USA
Spherical microphone arrays can measure a soundfield's spherical harmonic components, subject to certain bandwidth constraints depending on the array radius and the number and placement of the array's sensors. Ambisonics is designed to reconstruct the spherical harmonic components of a soundfield via a loudspeaker array and also faces certain limitations on its accuracy. This paper looks at how to reconcile these sometimes conflicting limitations to produce the optimum solution for decoding. In addition, binaural modeling is used as a method of evaluating the proposed decoding method and the accuracy with which it can reproduce a measured soundfield.
Convention Paper 9002 (Purchase now)
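
As background to the decoding problem described above, here is a minimal sketch, not the authors' decoder, of a basic first-order mode-matching Ambisonic decoder: the spherical-harmonic (B-format) direction vectors of an assumed loudspeaker layout are stacked into a matrix and the decoder is its Moore-Penrose pseudoinverse. The loudspeaker angles are illustrative assumptions only.
```python
import numpy as np

def bformat_vector(azimuth_rad: float, elevation_rad: float) -> np.ndarray:
    """First-order spherical-harmonic weights (W, X, Y, Z) with W = 1 (no 1/sqrt(2) scaling)."""
    return np.array([
        1.0,                                          # W (omnidirectional)
        np.cos(azimuth_rad) * np.cos(elevation_rad),  # X (front-back)
        np.sin(azimuth_rad) * np.cos(elevation_rad),  # Y (left-right)
        np.sin(elevation_rad),                        # Z (up-down)
    ])

# Assumed horizontal loudspeaker azimuths (degrees), for illustration only.
speaker_az_deg = [0.0, 72.0, 144.0, 216.0, 288.0]
Y = np.stack([bformat_vector(np.radians(a), 0.0) for a in speaker_az_deg])  # (L, 4)

# Mode-matching decoder: loudspeaker gains g = D @ b reproduce the B-format signal b.
D = np.linalg.pinv(Y.T)  # (L, 4)

# Example: decode a plane wave encoded from 30 degrees azimuth.
b = bformat_vector(np.radians(30.0), 0.0)
print(np.round(D @ b, 3))
```
Higher-order decoders follow the same pattern with more harmonic components; the paper's focus is on reconciling the bandwidth limits of the measurement array with the accuracy limits of such a reproduction decoder.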

P16-3 Subjective Evaluation of Multichannel Sound with Surround-Height Channels
Sungyoung Kim, Rochester Institute of Technology - Rochester, NY, USA; Doyuen Ko, Belmont University - Nashville, TN, USA; McGill University - Montreal, Quebec, Canada; Aparna Nagendra, Rochester Institute of Technology - Rochester, NY, USA; Wieslaw Woszczyk, McGill University - Montreal, Quebec, Canada
In this paper we report results from an investigation of listener perception of surround-height channels added to standard multichannel stereophonic reproduction. An ITU-R horizontal loudspeaker configuration was augmented by the addition of surround-height loudspeakers in order to reproduce concert hall ambience from above the listener. Concert hall impulse responses (IRs) were measured at three heights using an innovative microphone array designed to capture surround-height ambience. IRs were then convolved with anechoic music recordings in order to produce seven-channel surround sound stimuli. Listening tests were conducted in order to determine the perceived quality of surround-height channels as affected by three loudspeaker positions and three IR heights. Fifteen trained listeners compared each reproduction condition and ranked them based on their degree of appropriateness. Results indicate that surround-height loudspeaker position has a greater influence on perceived sound quality than IR height. Listeners considered the naturalness, spaciousness, envelopment, immersiveness, and dimension of the reproduced sound field when making judgments of surround-height channel quality.
Convention Paper 9003 (Purchase now)
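
The stimulus-creation step described above (convolving measured impulse responses with anechoic music) is standard practice; here is a minimal sketch of that step under assumed file names and channel layout, not the authors' materials.
```python
import numpy as np
from scipy.signal import fftconvolve
import soundfile as sf  # assumed available for reading/writing audio files

anechoic, fs = sf.read("anechoic_music.wav")   # mono anechoic recording (assumed)
irs, fs_ir = sf.read("hall_irs_7ch.wav")       # (samples, 7) measured hall IRs (assumed)
assert fs == fs_ir, "sample rates must match"

# Convolve the dry recording with each channel's IR to form a 7-channel stimulus.
stimulus = np.stack(
    [fftconvolve(anechoic, irs[:, ch]) for ch in range(irs.shape[1])],
    axis=1,
)
stimulus /= np.max(np.abs(stimulus))           # normalize to avoid clipping
sf.write("stimulus_7ch.wav", stimulus, fs)
```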

P16-4 A Perceptual Evaluation of Recording, Rendering, and Reproduction Techniques for Multichannel Spatial Audio
David Romblom, McGill University - Montreal, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, Quebec, Canada; Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Catherine Guastavino, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada
The objective of this project is to perceptually evaluate the relative merits of two different spatial audio recording and rendering techniques within the context of two different multichannel reproduction systems. The two recording and rendering techniques are "natural," using main microphone arrays, and "virtual," using spot microphones, panning, and simulated acoustic delay. The two reproduction systems are the 3/2 system (5.1 surround) and a 12/2 system, where the frontal L/C/R triplet is replaced by a 12-loudspeaker linear array. The perceptual attributes of multichannel spatial audio have been established by previous authors. In this study, magnitude ratings of selected spatial audio attributes are presented for the above treatments, and the results are discussed.
Convention Paper 9004 (Purchase now)
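
The abstract does not give implementation details of the "virtual" rendering; the following is a minimal sketch, under assumptions of our own, of the general idea of placing a spot-microphone signal with constant-power amplitude panning plus a simulated acoustic delay proportional to source distance.
```python
import numpy as np

C = 343.0  # speed of sound, m/s

def pan_with_delay(signal, fs, pan, distance_m):
    """Constant-power pan (0.0 = left, 1.0 = right) plus a propagation delay."""
    theta = pan * np.pi / 2.0
    gains = np.array([np.cos(theta), np.sin(theta)])        # constant-power law
    delay = int(round(distance_m / C * fs))                  # whole-sample delay
    delayed = np.concatenate([np.zeros(delay), signal])
    return np.stack([g * delayed for g in gains], axis=1)    # (samples, 2)

# Example: a 1 kHz spot-microphone tone panned slightly right, 3 m away.
fs = 48_000
t = np.arange(fs) / fs
spot = 0.5 * np.sin(2 * np.pi * 1000 * t)
stereo = pan_with_delay(spot, fs, pan=0.6, distance_m=3.0)
print(stereo.shape)
```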

P16-5 The Optimization of Wave Field Synthesis for Real-Time Sound Sources Rendered in Non-Anechoic Environments
Ian Drumm, University of Salford - Salford, Greater Manchester, UK; Robert Oldfield, University of Salford - Salford, Greater Manchester, UK
Presented here is a technique that employs audio capture and adaptive recursive filter design to render, in real time, dynamic, interactive, and content-rich soundscapes within non-anechoic environments. Implementations of wave field synthesis typically use convolution to mitigate the amplitude errors associated with linear loudspeaker arrays. Although recursive filtering approaches have been suggested before, this paper builds on that work by presenting an approach that exploits quasi-Newton adaptive filter design to construct components of the filtering chain that compensate for both the particular system configuration and the mediating environment. Early results using in-house software running on a 112-channel wave field synthesis system show the potential to improve the quality of real-time 3-D sound rendering in less-than-ideal contexts.
Convention Paper 9005 (Purchase now)
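
To illustrate the quasi-Newton filter-design idea mentioned above, here is a minimal sketch, not the authors' implementation: a quasi-Newton optimizer (L-BFGS, via scipy) fits compensation-filter coefficients to an assumed target magnitude response. For brevity this fits a short FIR filter, whereas the paper works with recursive filters, so it only illustrates the optimization step.
```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import freqz

fs = 48_000
n_taps = 32
freqs = np.linspace(20, 20_000, 256)

# Assumed target: a gentle low-frequency boost (up to ~3 dB) to counter
# the roll-off associated with a linear loudspeaker array.
target_mag = 10 ** (3.0 * (1 - freqs / 20_000) / 20)

def cost(taps):
    _, h = freqz(taps, worN=freqs, fs=fs)
    return np.mean((np.abs(h) - target_mag) ** 2)

x0 = np.zeros(n_taps)
x0[0] = 1.0  # start from a unit impulse (flat response)
result = minimize(cost, x0, method="L-BFGS-B")
print("residual:", result.fun)
```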

P16-6 A Perceptual Evaluation of Room Effect Methods for Multichannel Spatial Audio
David Romblom, McGill University - Montreal, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, Quebec, Canada; Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Catherine Guastavino, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada
The room effect is an important aspect of sound recording technique and is typically captured separately from the direct sound. The perceptual attributes of multichannel spatial audio have been established by previous authors, while the psychoacoustic underpinnings of room perception are known to varying degrees. The Hamasaki Square, in combination with a delay plan and an aesthetic disposition toward "natural" recordings, is an approach practiced by some sound recording engineers. This study compares the Hamasaki Square to an alternative room effect and to dry approaches in terms of a number of multichannel spatial audio attributes. A concurrent experiment investigated the same spatial audio attributes with regard to the microphone and reproduction approach. As such, the current study uses a 12/2 system based upon 3/2 (5.1 surround) where the frontal L/C/R triplet has been replaced by a linear wavefront reconstruction array.
AES 135th Convention Student Technical Papers Award Co-winner
Convention Paper 9006 (Purchase now)
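
The abstract mentions a delay plan applied to the Hamasaki Square room microphones but gives no details; the sketch below is a minimal illustration under our own assumptions (delay values and channel routing are not from the paper) of delaying four room-microphone channels so that the room effect arrives after the frontal direct sound.
```python
import numpy as np

def apply_delay_plan(room_mics, fs, delays_ms):
    """Delay each room-microphone channel (columns of room_mics) by its entry
    in delays_ms, zero-padding so all channels share a common length."""
    delays = [int(round(d * fs / 1000)) for d in delays_ms]
    length = room_mics.shape[0] + max(delays)
    out = np.zeros((length, room_mics.shape[1]))
    for ch, d in enumerate(delays):
        out[d:d + room_mics.shape[0], ch] = room_mics[:, ch]
    return out

# Example: four room channels with assumed 15-30 ms delays relative to the direct sound.
fs = 48_000
room = np.random.randn(fs, 4) * 0.01   # placeholder room-microphone signals
delayed = apply_delay_plan(room, fs, delays_ms=[15.0, 15.0, 30.0, 30.0])
print(delayed.shape)
```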
