Session Z7 Monday, May 10, 12:30 h–14:00 h
Posters: Multichannel Sound & Wave Field Synthesis
Multichannel Sound
Z7-1 Coding Strategies and Quality Measure for Multichannel Audio
Soledad Torres-Guijarro1, Jon Ander Beracoechea-Álava1, Isidoro Pérez-García2, F. Javier Casajús-Quirós1
1 Universidad Politécnica de Madrid, Madrid, Spain
2 European University of Madrid, Madrid, Spain
The Karhunen-Loève Transform (KLT) has proven to be an efficient method of decorrelating multichannel signals prior to coding. Careful bit-rate distribution among the decorrelated channels reduces the overall bit rate. In order to explore how bits are distributed in the coding process, a new quality measure of the reconstructed sound field is proposed: the binaural signal that the listener would obtain in a real environment is synthesized and evaluated by means of the standard Perceptual Evaluation of Audio Quality (PEAQ) method. Results on coding via AAC with different kinds of audio signals, bit allocations, and multichannel arrangements are reported.
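The abstract does not spell out the transform itself; for readers unfamiliar with the KLT, the following NumPy sketch shows the generic eigendecomposition-based decorrelation it refers to. Function names, channel layout, and the toy signal are illustrative and not taken from the paper.

```python
import numpy as np

def klt_decorrelate(x):
    """Decorrelate a multichannel signal with the Karhunen-Loeve Transform.

    x: array of shape (channels, samples). Returns (y, basis, mean), where
    y holds the decorrelated eigen-channels ordered by decreasing variance.
    """
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    cov = xc @ xc.T / xc.shape[1]            # inter-channel covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigendecomposition (ascending)
    order = np.argsort(eigvals)[::-1]        # strongest eigen-channels first
    basis = eigvecs[:, order]
    y = basis.T @ xc                         # project onto the KLT basis
    return y, basis, mean

def klt_reconstruct(y, basis, mean):
    """Invert the transform; the basis is orthogonal, so its transpose inverts it."""
    return basis @ y + mean

# Toy usage: five highly correlated noise channels
rng = np.random.default_rng(0)
common = rng.standard_normal(48000)
x = np.stack([common + 0.1 * rng.standard_normal(48000) for _ in range(5)])
y, basis, mean = klt_decorrelate(x)
print(np.allclose(klt_reconstruct(y, basis, mean), x))  # True
```

Because the eigen-channels are ordered by decreasing variance, a bit allocator can spend most of the budget on the first few channels, which is where the overall bit-rate reduction comes from.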
Z7-2 An Improvement in Sound Quality of LFE Flattening Group Delay
Shintaro Hosoi1, Hiroyuki Hamada1, Nobuo Kameyama2
1 Pioneer Corporation, Tokorozawa, Saitama, Japan
2 NRP Ltd., Tokorozawa, Saitama, Japan
In this paper we raise the issue of bass reproduction in surround music when an LFE channel is used. We show that this issue originates from the method of creating the LFE. We therefore propose a practicable method of LFE phase synchronization that improves bass quality by applying the proper amount of delay. The optimum delay for this method is calculated for various filter cutoff frequencies and orders. We describe how the method can be used in actual recording projects and how to monitor when an encoder is used.
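The paper's exact delay formula is not given in the abstract; as a rough sketch of the idea, the group delay that an LFE low-pass filter introduces in the bass region can be estimated numerically and used as the compensating delay. The Butterworth filter type, sampling rate, and probe frequency below are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, group_delay

def lfe_alignment_delay(cutoff_hz, order, fs=48000, probe_hz=40.0):
    """Estimate the delay (in samples) that an LFE low-pass filter introduces
    at a representative bass frequency, so the other channels can be delayed
    by the same amount to stay in sync with the LFE.
    """
    b, a = butter(order, cutoff_hz, btype='low', fs=fs)
    w, gd = group_delay((b, a), w=[probe_hz], fs=fs)
    return float(gd[0])   # group delay in samples at probe_hz

for order in (2, 4, 8):
    d = lfe_alignment_delay(cutoff_hz=120.0, order=order)
    print(f"order {order}: ~{d:.1f} samples (~{1000 * d / 48000:.2f} ms)")
```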
Z7-3 High Spatial Resolution Multichannel Recording
Arnaud Laborie, Rémy Bruno, Sébastien Montoya, Trinnov Audio, Paris, France
Multichannel recording is certainly one of the most important remaining issues in today's sound techniques. A good surround recording is extremely difficult to obtain because it must fulfill a number of conditions, including a feeling of envelopment, accurate localization, and a large sweet spot, without compromising the timbres. Advanced signal processing makes it possible to obtain directivities derived from panning laws designed to optimally drive any multichannel layout. This paper presents the underlying concept of High Spatial Resolution, the spatial equivalent of High Fidelity, and points out why it is a key point in achieving high spatial quality. The actual performance of such a High Spatial Resolution 5.0 microphone, featuring a small array of 8 omnidirectional capsules, is fully simulated and measured.
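The actual Trinnov processing is not described beyond the abstract; purely as an illustration of how a directional pickup can be derived from omnidirectional capsules, the sketch below forms a first-order (cardioid-like) signal from two closely spaced omnis using the classic delay-and-subtract gradient technique. Capsule spacing, sampling rate, and the fractional-delay method are illustrative assumptions, not the paper's method.

```python
import numpy as np

def delay_and_subtract_cardioid(front, rear, spacing_m=0.02, fs=48000, c=343.0):
    """Form a cardioid-like pickup from two closely spaced omni capsules by
    delaying the rear capsule by the acoustic travel time across the spacing
    and subtracting it from the front capsule (gradient technique).
    The raw output rises with frequency and needs a low-frequency boost
    before use; that equalization step is omitted here.
    """
    delay = spacing_m / c * fs                       # delay in samples (fractional)
    n = np.arange(len(rear), dtype=float)
    idx = np.clip(n - delay, 0.0, len(rear) - 1.0)   # read positions of delayed signal
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(rear) - 1)
    frac = idx - lo
    rear_delayed = (1.0 - frac) * rear[lo] + frac * rear[hi]  # linear interpolation
    return front - rear_delayed
```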
Wave Field Synthesis
Z7-4 Wave Field Synthesis: Mixing and Mastering Tools for Digital Audio Workstations
Renato Pellegrini, Clemens Kuhn, sonicEmotion AG, Dielsdorf, Switzerland
Wave Field Synthesis (WFS) provides holographic sound reproduction over a large listening area. The fundamentals of WFS recording and reproduction techniques have been developed in the past few years; however, there is a lack of intuitive tools for WFS mixing and mastering. In this paper the authors propose a WFS user interface compatible with available and accepted digital audio workstations. These WFS plug-ins are based on a novel audio network technology. They open new possibilities for creative audio production in WFS.
Z7-5 Generation of Highly Immersive Atmospheres for Wave Field Synthesis Reproduction
Andreas Wagner1,2, Andreas Walther1,2, Frank Melchior2, Michael Strauss2
1 Technical University Ilmenau, Ilmenau, Germany
2 Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
Wave Field Synthesis (WFS) permits the reproduction of a sound field with correct localization and spatial impression over nearly the whole reproduction room. So far, this technique has mainly been used and demonstrated for music reproduction. Because of its properties, WFS is ideal for the creation of sound for motion pictures or virtual reality applications. In both cases the creation of highly immersive atmospheres is important to give the audience the illusion of being part of the auditory scene. In this paper a new approach to designing immersive atmospheres (e.g., rain) using Wave Field Synthesis reproduction is presented. New tools and techniques to control and generate these atmospheres have been developed and investigated in listening tests.
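The tools themselves are not described in the abstract; one plausible, simplified way to feed a WFS renderer with an atmosphere is sketched below: a mono recording is split into mutually decorrelated layers, each of which can then be rendered as a plane wave from a different direction. The random-phase decorrelation, layer count, and direction spacing are illustrative assumptions, not the authors' technique.

```python
import numpy as np

def decorrelated_atmosphere_layers(mono, n_sources=12, fir_len=1024, seed=0):
    """Split one mono atmosphere recording (e.g., rain) into several mutually
    decorrelated layers by convolving it with short random-phase, flat-magnitude
    FIR filters (keeps the timbre, breaks the inter-channel coherence).
    Assumes len(mono) >= fir_len. Each layer would be assigned to one
    plane-wave direction, spread evenly around the listening area.
    """
    rng = np.random.default_rng(seed)
    layers = np.empty((n_sources, len(mono)))
    for i in range(n_sources):
        mag = np.ones(fir_len // 2 + 1)
        phase = rng.uniform(-np.pi, np.pi, fir_len // 2 + 1)
        phase[0] = 0.0    # keep DC real
        phase[-1] = 0.0   # keep Nyquist real
        fir = np.fft.irfft(mag * np.exp(1j * phase), n=fir_len)
        layers[i] = np.convolve(mono, fir, mode='same')
    directions_deg = np.linspace(0.0, 360.0, n_sources, endpoint=False)
    return layers, directions_deg
```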
Z7-6 Efficient Active Listening Room Compensation for Wave Field Synthesis
Sascha Spors, Herbert Buchner, Rudolf Rabenstein, University of Erlangen-Nuremberg, Erlangen, Germany
Wave field synthesis is an auralization technique that allows control of the wave field within the entire listening area. However, reflections in the listening room interfere with the auralized wave field and may impair the spatial reproduction. Active listening room compensation aims at reducing these impairments by using the playback system itself. Due to the high number of playback channels used for wave field synthesis, existing approaches to room compensation are not applicable. A novel approach to active room compensation overcomes these problems through a transformation from the space-time domain to the wave domain and the application of wave-domain adaptive filtering.
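The abstract only names the transformation; a minimal sketch of one common realization of a wave-domain transform, a spatial DFT across the channels of a circular array yielding circular-harmonic components, is given below. Whether the paper uses exactly this decomposition is not stated, so treat it as an assumption.

```python
import numpy as np

def to_wave_domain(channel_spectra):
    """Spatial DFT across the channel axis of a circular array.

    channel_spectra: complex array of shape (channels, frequency_bins),
    e.g. the short-time spectra of all loudspeaker (or microphone) signals.
    Returns circular-harmonic ("wave-domain") components of the same shape;
    room compensation can then adapt one short filter per component instead
    of a full matrix of filters between all loudspeakers and microphones.
    """
    return np.fft.fft(channel_spectra, axis=0) / channel_spectra.shape[0]

def from_wave_domain(wave_components):
    """Inverse spatial DFT back to per-channel spectra."""
    return np.fft.ifft(wave_components, axis=0) * wave_components.shape[0]
```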
Z7-7 Full-Duplex Systems for Sound Field Recording and Auralization Based on Wave Field Synthesis
Herbert Buchner, Sascha Spors, Walter Kellermann, University of Erlangen-Nuremberg, Erlangen, Germany
For high-quality multimedia communication systems such as telecollaboration or virtual reality applications, both multichannel sound reproduction and full-duplex capability are highly desirable. Full 3-D sound spatialization over a large listening area is offered by wave field synthesis, where arrays of loudspeakers generate a prespecified sound field. However, before this new technique can be utilized for full-duplex systems with microphone arrays and loudspeaker arrays, an efficient solution to the problem of multichannel acoustic echo cancellation (MCAEC) has to be found in order to avoid acoustic feedback. This paper presents a novel approach that extends the current state of the art of MCAEC and transform-domain adaptive filtering by reconciling the flexibility of adaptive filtering and the underlying physics of acoustic waves in a systematic and efficient way. Our new framework of wave-domain adaptive filtering (WDAF) explicitly takes into account the spatial dimensions of the closely spaced loudspeaker and microphone arrays. Experimental results with a 32-channel AEC verify the concept for both simulated and actually measured room acoustics.
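WDAF itself is beyond a short example, but the per-component adaptation it enables can be illustrated with a plain NLMS echo canceller operating on a single wave-domain component (reference signal on the loudspeaker side, echo-contaminated signal on the microphone side). Filter length, step size, and signal names are assumptions for illustration; the paper's actual adaptation algorithm is not specified in the abstract.

```python
import numpy as np

def nlms_echo_cancel(x, d, filt_len=256, mu=0.5, eps=1e-8):
    """NLMS echo canceller for one wave-domain component.

    x: loudspeaker-side (reference) component, d: microphone-side component
    containing the echo (len(x) >= len(d)). Returns the echo-reduced error
    signal. In a WDAF system one such adaptive filter runs per wave component,
    rather than a full matrix of filters between all loudspeakers and mics.
    """
    w = np.zeros(filt_len)
    e = np.zeros(len(d))
    for n in range(filt_len, len(d)):
        xn = x[n - filt_len:n][::-1]            # most recent reference samples
        y = w @ xn                              # echo estimate
        e[n] = d[n] - y                         # residual after cancellation
        w += mu * e[n] * xn / (xn @ xn + eps)   # normalized LMS update
    return e
```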
Z7-8 Equalization of Wave Field Synthesis Systems
Andreas Apel, Thomas Röder, Sandra Brix, Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
Wave Field Synthesis allows the reproduction of arbitrary wave fields in a large listening area. The theoretical loudspeaker driving function implies that a correction filter must be implemented to obtain a flat frequency response of the system. Practical implementations require an adaptation of the filter to the current source position. In this paper, measurements of frequency responses for different source positions are compared. Based on these measurements, a method for proper equalization of the system is proposed. Finally, results of listening tests are presented that compare the quality of position-dependent filtering with position-independent filtering.
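The abstract does not give the filter design. In standard 2.5D WFS theory the driving function contains a sqrt(jk) factor, i.e. a +3 dB/octave spectral tilt, so a rough position-independent starting point is a linear-phase FIR with a sqrt(f) magnitude target, as sketched below; the position-dependent adaptation investigated in the paper would sit on top of such a filter. Tap count, reference frequency, and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import firwin2

def wfs_preemphasis_fir(fs=48000, numtaps=513, f_ref=1000.0):
    """Linear-phase FIR approximating the +3 dB/octave (sqrt(f)) spectral
    correction implied by the 2.5D WFS driving function, normalized to unity
    gain at f_ref. Position-dependent gains and delays are handled separately.
    """
    freqs = np.linspace(0.0, fs / 2, 256)                # design grid in Hz
    gains = np.sqrt(np.maximum(freqs, 1.0) / f_ref)      # sqrt(f) target, clamped near DC
    return firwin2(numtaps, freqs, gains, fs=fs)
```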