AES London 2010
Paper Session P21
Tuesday, May 25, 09:00 — 13:00
(Room C3)
P21 - Multichannel and Spatial Audio: Part 2
Chair: Ronald Aarts
P21-1 Center-Channel Processing in Virtual 3-D Audio Reproduction Over Headphones or Loudspeakers—Jean-Marc Jot, Martin Walsh, DTS Inc. - Scotts Valley, CA, USA
Virtual 3-D audio processing systems for the spatial enhancement of recordings reproduced over headphones or frontal loudspeakers generally provide a less compelling effect on center-panned sound components. This paper examines this deficiency and presents virtual 3-D audio processing algorithm modifications that provide a compelling spatial enhancement effect over headphones or loudspeakers even for sound components localized in the center of the stereo image, preserve the timbre and balance of the original recording, and produce a more stable “phantom center” image over loudspeakers. The proposed improvements are applicable, in particular, to laptop and TV audio systems, mobile Internet devices, and home theater “soundbar” loudspeakers.
Convention Paper 8116
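The center-channel deficiency discussed above is easiest to see in a mid/side decomposition, where the center-panned component is exactly the part common to both channels and is therefore untouched by enhancers that widen only the side signal. The sketch below (Python/NumPy) illustrates that baseline behavior; it is not the paper's algorithm, and the widening gain is an arbitrary assumption.

    import numpy as np

    def mid_side(left, right):
        """Split a stereo pair into mid (center-panned) and side components."""
        mid = 0.5 * (left + right)   # content common to both channels
        side = 0.5 * (left - right)  # residual stereo content
        return mid, side

    def naive_widener(left, right, width=2.0):
        """Widen only the side signal: the center-panned mid passes through
        unprocessed, which is why simple enhancers sound flat on center content."""
        mid, side = mid_side(left, right)
        return mid + width * side, mid - width * side

    # A center-panned tone ends up entirely in mid; side is all zeros.
    t = np.arange(480) / 48000.0
    center = np.sin(2 * np.pi * 440 * t)
    mid, side = mid_side(center, center)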
P21-2 Parametric Representation of Complex Sources in Reflective Environments—Dylan Menzies, De Montfort University - Leicester, UK
Aspects of source directivity in reflective environments are considered, including the audible effects of directivity and how these can be reproduced. Different methods of encoding and production are presented, leading to a new approach that extends parametric encoding of reverberation, as described in the DirAC and MPEG formats, to include the response to source directivity.
Convention Paper 8118
P21-3 Analysis and Improvement of Pre-Equalization in 2.5-Dimensional Wave Field Synthesis—Sascha Spors, Jens Ahrens, Technische Universität Berlin - Berlin, Germany
Wave field synthesis (WFS) is a well-established high-resolution spatial sound reproduction technique. Typical WFS systems aim at reproduction in a plane, using loudspeakers that enclose the plane; this constitutes a so-called 2.5-dimensional reproduction scenario. It has been shown that a spectral correction of the reproduced wave field is required in this context; for WFS this correction is known as the pre-equalization filter. The derivation of WFS is based on a series of approximations of the physical foundations. This paper investigates the consequences of these approximations for the reproduced sound field and, in particular, for the pre-equalization filter. An exact solution provided by the recently presented spectral division method is employed to derive an improved WFS driving function. Furthermore, the effects of spatial sampling and truncation on the pre-equalization are discussed.
Convention Paper 8121
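For orientation, the conventional 2.5D WFS pre-equalization filter that the paper revisits has a sqrt(jw/c) characteristic, i.e., a +3 dB/octave magnitude slope, usually flattened above the spatial aliasing frequency. A minimal sketch of such a prefilter as a linear-phase FIR follows; the corner frequencies, filter length, and FIR design method are assumptions, and the paper's improved driving function is not reproduced here.

    import numpy as np
    from scipy.signal import firwin2

    fs = 48000
    f = np.linspace(0.0, fs / 2.0, 512)
    f_lo, f_hi = 100.0, 1500.0                    # assumed corner / aliasing frequencies
    mag = np.sqrt(np.clip(f, f_lo, f_hi) / f_lo)  # |H(f)| ~ sqrt(f): +3 dB/octave
    h = firwin2(257, f / (fs / 2.0), mag)         # linear-phase FIR prefilter taps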
P21-4 Discrete Wave Field Synthesis Using Fractional Order Filters and Fractional Delays—César D. Salvador, Universidad de San Martin de Porres - Lima, Peru
A discretization of the generalized 2.5D wave field synthesis driving functions is proposed in this paper. Time discretization is applied with special attention to the prefiltering, which involves half-order systems, and to the delaying, which involves fractional-sample delays. Space discretization uses uniformly distributed loudspeakers along arbitrarily shaped contours: visual and numerical comparisons between lines and convex arcs, and between squares and circles, are shown. An immersive soundscape composed of nature sounds is reported as an example. Modeling uses MATLAB, and real-time reproduction uses Pure Data. Simulations of synthesized plane and spherical wave fields over the whole listening area report discretization errors of less than 1%, using 16 loudspeakers and 5th-order IIR prefilters.
Convention Paper 8122
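The fractional-sample delays mentioned above are commonly realized with Lagrange-interpolation FIR filters (the half-order prefilter corresponds to the +3 dB/octave characteristic sketched under P21-3). Below is a minimal sketch of the standard Lagrange fractional-delay design; the filter order and the convolution-based application are assumptions, and the paper may use a different approximation.

    import numpy as np

    def lagrange_fractional_delay(delay, order=3):
        """FIR coefficients delaying a signal by `delay` samples (Lagrange
        interpolation). Accuracy is best when `delay` lies near order / 2."""
        h = np.ones(order + 1)
        for n in range(order + 1):
            for k in range(order + 1):
                if k != n:
                    h[n] *= (delay - k) / (n - k)
        return h

    # Delay a signal by 1.3 samples with a 3rd-order interpolator.
    x = np.random.randn(1024)
    y = np.convolve(x, lagrange_fractional_delay(1.3))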
P21-5 Immersive Virtual Sound Beyond 5.1 Channel Audio—Kangeun Lee, Changyong Son, Dohyung Kim, Samsung Advanced Institute of Technology - Suwon, Korea
In this paper a virtual sound system is introduced for next-generation multichannel audio. The system provides 9.1-channel surround sound via a conventional 5.1 loudspeaker layout and 5.1 content. To deliver 9.1 sound, the system combines channel upmixing with vertical sound localization that can create virtually localized sound anywhere on a spherical surface around the listener's head. Amplitude panning coefficients are used for the channel upmixing, together with a smoothing technique that reduces the musical noise introduced by upmixing. The proposed vertical rendering is based on VBAP (vector base amplitude panning) using three loudspeakers among the 5.1 layout. For the quality test, the upmixing and virtual rendering methods were evaluated on real 9.1 and 5.1 loudspeaker layouts, respectively, and compared with Dolby Pro Logic IIz; the demonstrated performance is superior to the references.
Convention Paper 8117
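For reference, VBAP computes the gains of a loudspeaker triplet by expressing the source direction in the (non-orthogonal) basis of the three loudspeaker direction vectors. A minimal sketch follows; the triplet geometry in the example is an assumption, not the paper's 5.1-based selection.

    import numpy as np

    def vbap_gains(source_dir, spk_dirs):
        """Vector base amplitude panning over one loudspeaker triplet.
        source_dir: unit vector toward the virtual source, shape (3,).
        spk_dirs:   rows are unit vectors toward the three loudspeakers, (3, 3)."""
        g = np.linalg.solve(spk_dirs.T, source_dir)  # solve L^T g = p
        if np.any(g < 0):
            raise ValueError("source lies outside the loudspeaker triangle")
        return g / np.linalg.norm(g)                 # constant-power normalization

    def unit(az_deg, el_deg):
        az, el = np.radians(az_deg), np.radians(el_deg)
        return np.array([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])

    # Elevated source rendered over two ear-level and one overhead loudspeaker.
    triplet = np.vstack([unit(45, 0), unit(-45, 0), unit(0, 90)])
    gains = vbap_gains(unit(0, 30), triplet)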
P21-6 Acoustical Zooming Based on a Parametric Sound Field Representation—Richard Schultz-Amling, Fabian Kuech, Oliver Thiergart, Markus Kallinger, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Directional audio coding (DirAC) is a parametric approach to the analysis and reproduction of spatial sound. The DirAC parameters, namely the direction of arrival and the diffuseness of the sound, can be further exploited in modern teleconferencing systems: based on the directional parameters, a video camera can be steered automatically toward the active talker. To maintain consistency between the visual and acoustical cues, the virtual recording position should then follow the visual movement. In this paper we present an approach to acoustical zooming that provides audio rendering that follows the movement of the visual scene. The algorithm does not rely on a priori information about the sound reproduction system, as it operates directly in the DirAC parameter domain.
Convention Paper 8120
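The DirAC parameters named above are typically estimated from B-format signals via the active intensity vector. The sketch below shows the usual per-frame estimators; the sqrt(2) scaling depends on the assumed B-format normalization convention, and the zoom-domain parameter remapping itself is not shown.

    import numpy as np

    def dirac_parameters(w, x, y, z):
        """Per-frame DirAC parameters from B-format frames (arrays of samples).
        With standard B-format polarity the time-averaged w*[x, y, z] product
        points toward the source, giving the direction of arrival directly."""
        u = np.stack([x, y, z])
        intensity = np.mean(w * u, axis=1)   # active intensity (up to constants)
        azimuth = np.arctan2(intensity[1], intensity[0])
        elevation = np.arctan2(intensity[2], np.hypot(intensity[0], intensity[1]))
        energy = np.mean(w**2 + 0.5 * np.sum(u**2, axis=0))  # field energy (up to constants)
        diffuseness = 1.0 - np.sqrt(2) * np.linalg.norm(intensity) / (energy + 1e-12)
        return azimuth, elevation, diffuseness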
P21-7 SoundDelta: A Study of Audio Augmented Reality Using WiFi-Distributed Ambisonic Cell Rendering—Nicholas Mariette, Brian F. G. Katz, LIMSI-CNRS - Orsay, France; Khaled Boussetta, Université Paris 13 - Paris, France; Olivier Guillerminet, REMU - Paris, France
SoundDelta is an art/research project that produced several public audio augmented reality artworks. These spatial soundscapes consisted of virtual sound sources located in a designated terrain such as a town square. Pedestrian users experienced the result as interactive binaural audio by walking through the augmented terrain with headphones and the SoundDelta mobile device. SoundDelta uses a distributed "Ambisonic cell" architecture that scales efficiently to many users: a server renders Ambisonic audio for fixed positions, which is streamed over WiFi to the mobile devices, each of which renders a custom, individualized binaural mix for its user's present position. A spatial cognition mapping experiment was conducted to validate the perception of the soundscape and to compare it with an individual rendering system.
Convention Paper 8123
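One plausible reading of the "Ambisonic cell" architecture described above is that the mobile client blends the streams pre-rendered for the nearest fixed cell positions, weighted by proximity, before its local binaural decode. The sketch below illustrates that blending step only; the number of cells used, the inverse-distance weighting law, and all names are assumptions, not necessarily SoundDelta's actual scheme.

    import numpy as np

    def cell_blend_weights(user_pos, cell_positions, k=3, eps=1e-6):
        """Return the indices and normalized weights of the k cells nearest
        the user; the client mixes those cells' streams with these weights."""
        d = np.linalg.norm(cell_positions - user_pos, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + eps)   # inverse-distance weighting
        return nearest, w / w.sum()

    # Four cells on a 10 m grid; user standing at (3, 4).
    cells = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    idx, weights = cell_blend_weights(np.array([3.0, 4.0]), cells)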
P21-8 Surround Sound Panning Technique Based on a Virtual Microphone Array—Filippo M. Fazi, University of Southampton - Southampton, UK; Toshiro Yamada, Suketu Kamdar, University of California, San Diego - La Jolla, CA, USA; Philip A. Nelson, University of Southampton - Southampton, UK; Peter Otto, University of California, San Diego - La Jolla, CA, USA
A multichannel panning technique is presented that aims at reproducing a plane wave with an array of loudspeakers. The loudspeaker gains are computed by solving an acoustical inverse problem, which involves the inversion of a matrix of transfer functions between the loudspeakers and the elements of a virtual microphone array whose center corresponds to the location of the listener. The radius of the virtual microphone array is varied with frequency in such a way that the transfer function matrix is independent of frequency. As a consequence, the inverse problem needs to be solved for one frequency only, and the loudspeaker coefficients obtained can be implemented as simple gains.
Convention Paper 8119
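The abstract above describes the generic formulation: invert the matrix of loudspeaker-to-virtual-microphone transfer functions for a plane-wave target, with the array radius tied to frequency so that the matrix, and hence the gain solve, is needed only once. A minimal sketch under a free-field monopole model follows; the Tikhonov regularization, the fixed k*r value, and the parameter names are assumptions.

    import numpy as np

    def plane_wave_gains(spk_pos, mic_dirs, prop_dir, freq=1000.0, kr=1.0,
                         beta=1e-3, c=343.0):
        """Loudspeaker gains reproducing a plane wave at a virtual microphone
        array centered on the listener. spk_pos: (L, 3) loudspeaker positions;
        mic_dirs: (M, 3) unit vectors to the virtual microphones; prop_dir:
        (3,) propagation direction of the target plane wave."""
        k = 2.0 * np.pi * freq / c
        mics = (kr / k) * mic_dirs          # radius shrinks as 1/k, keeping k*r fixed
        dist = np.linalg.norm(mics[:, None, :] - spk_pos[None, :, :], axis=2)
        G = np.exp(-1j * k * dist) / (4.0 * np.pi * dist)  # (M, L) transfer matrix
        p = np.exp(-1j * k * mics @ prop_dir)              # target plane wave at the mics
        GH = G.conj().T                                    # regularized least squares
        return np.linalg.solve(GH @ G + beta * np.eye(G.shape[1]), GH @ p)

    # Example: 8 loudspeakers on a 2 m circle, 16 virtual mics, frontal plane wave.
    phi_l = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    spk = np.stack([2 * np.cos(phi_l), 2 * np.sin(phi_l), np.zeros(8)], axis=1)
    phi_m = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    mic = np.stack([np.cos(phi_m), np.sin(phi_m), np.zeros(16)], axis=1)
    g = plane_wave_gains(spk, mic, np.array([1.0, 0.0, 0.0]))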