AES New York 2015
Paper Session P7

P7 - Perception—Part 2


Friday, October 30, 9:00 am — 12:00 pm (Room 1A07)

Chair:
Sungyoung Kim, Rochester Institute of Technology - Rochester, NY, USA

P7-1 In-Vehicle Audio System Sound Quality Preference Study
Patrick Dennis, Nissan North America - Farmington Hills, MI, USA
In-vehicle audio systems present a unique listening environment. Listeners were asked to adjust the relative bass and treble levels, as well as the fade and balance levels, according to preference for three music programs reproduced through a high-quality in-vehicle audio system. The audio system frequency response was initially tuned to a frequency spectrum similar to that preferred for in-room loudspeakers. The fade control was initially set to give a frontal image with some rear envelopment using two different rear speaker locations, rear deck and rear door, while the balance control was set to give a center image between the center of the steering wheel and the rearview mirror. Stage height was located on top of the instrument panel (head level). Results showed that on average listeners preferred +13 dB bass and –2 dB treble relative to a flat response, while fade was +3.5 dB rearward for rear-deck-mounted speakers and +2.6 dB rearward for rear-door-mounted speakers, and balance was 0 dB. Significant variations between individual listeners were observed.
Convention Paper 9393

P7-2 Adapting Audio Quality Assessment Procedures for Engineering Practice
Jan Berg, Luleå University of Technology - Piteå, Sweden; Nyssim Lefford, Luleå University of Technology - Luleå, Sweden
Audio quality is of concern up and down the production chain, from content creation to distribution. The technologies employed at each step (equipment, processors such as codecs, downmix algorithms, and loudspeakers) are all scrutinized for their impact. The now well-established field of audio quality research has developed robust assessment methods. To form a basis for this work, research has investigated how perceptual dimensions are formed and expressed, and the literature includes numerous sonic attributes that may be used to evaluate audio quality. Taken together, these findings have provided benchmarks and guidelines for improving audio technology, setting standards in the manufacture of sound and recording equipment, and furthering the design of reproduction systems and spaces. By comparison, however, they are rarely used to inform recording and mixing practice. In this paper, quality evaluation and mixing practice are compared on selected counts, and observations are made on the points where these fields may mutually inform one another.
Convention Paper 9394

P7-3 Perception and Automated Assessment of Audio Quality in User Generated Content
Bruno Fazenda, University of Salford - Salford, Greater Manchester, UK; Paul Kendrick, University of Salford - Salford, UK; Trevor Cox, University of Salford - Salford, UK; Francis Li, University of Salford - Salford, UK; Iain Jackson, University of Manchester - Manchester, UK
Many of us now carry technologies that allow us to record sound, whether the sound of a child's first music concert on a digital camera or a practical joke captured on a mobile phone. However, the production quality of the sound in user-generated content is often very poor: distorted, noisy, with garbled speech or indistinct music. This paper reports the outcomes of a three-year research project on assessing the quality of user-generated recordings. Our interest lies in the causes of poor recordings, especially what happens between the sound source and the electronic signal emerging from the microphone. We investigated typical problems: distortion, wind noise, microphone handling noise, and frequency response. From subjective tests on the perceived quality of such errors, together with signal features extracted from the audio files, we developed perceptual models that automatically predict the perceived quality of audio streams unknown to the model (a minimal sketch of this modeling approach appears below). It is shown that perceived quality is more strongly associated with distortion and frequency response than with wind and handling noise, which are only slightly less important. The work presented here has applications in areas such as perception and measurement of audio quality, signal processing, feature detection, and machine learning.
Convention Paper 9395
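
The abstract describes training a model to map extracted signal features to listener quality ratings. The following is a minimal sketch of that general idea, under stated assumptions: the feature set, the synthetic ratings, and the random-forest model are all illustrative stand-ins, not the authors' actual features, data, or model.

```python
# Minimal sketch of feature-to-quality modeling (illustrative assumptions only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-recording features: a distortion measure, a wind-noise
# estimate, a handling-noise count, and a spectral tilt summarizing
# frequency response. Stand-ins for real extracted values.
n_recordings = 200
X = rng.normal(size=(n_recordings, 4))

# Hypothetical mean opinion scores (1-5), made to depend most strongly on
# distortion and spectral tilt, echoing the paper's reported finding.
y = 3.0 - 0.8 * X[:, 0] - 0.3 * X[:, 1] - 0.2 * X[:, 2] - 0.6 * X[:, 3]
y = np.clip(y + rng.normal(scale=0.3, size=n_recordings), 1.0, 5.0)

# Fit a regressor and check generalization to unseen recordings.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```

Once validated this way, such a model can score new recordings without further listening tests, which is the paper's stated goal of automated assessment.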

P7-4 Compensating for Tonal Balance Effects Due to Acoustic Crosstalk Removal while Listening with Headphones
Bob Schulein, RBS Consultants - Schaumburg, IL, USA
With the large number of headphones now in use, much of the recorded music that was mixed on loudspeakers is experienced through headphones. It is well known that headphone listening alters spatial perception because the acoustic crosstalk normally associated with loudspeaker listening is eliminated, resulting in a widening of the perceived sound stage. Beyond this difference, a question arises as to changes in perceived tonal balance that may occur with the removal of acoustic crosstalk. This paper presents a method of measuring such differences based on a series of near-field binaural mannequin recordings from which the spectral influence of crosstalk is determined (a toy calculation illustrating the effect appears below). Measurement data from this investigation are presented. Results suggest that headphones designed to sound well balanced for most popular music benefit from a low-frequency boost, whereas headphones designed primarily for classical listening require less boost.
Convention Paper 9396
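
A toy calculation can show why crosstalk removal changes tonal balance. At low frequencies head shadowing is weak, so the contralateral (crosstalk) path arrives at the ear nearly as strong as the direct path; removing it costs roughly 3 dB of power-summed level in the bass while leaving the treble almost untouched. The first-order head-shadow filter below is an assumption for illustration only, not the paper's binaural-mannequin measurement method.

```python
# Toy estimate of the spectral effect of removing acoustic crosstalk,
# using an assumed first-order head-shadow model (not the paper's data).
import numpy as np

f = np.logspace(1, 4, 200)                        # 10 Hz .. 10 kHz
direct = np.ones_like(f)                          # ipsilateral path, flat
shadow = 1.0 / np.sqrt(1.0 + (f / 1500.0) ** 2)   # crude contralateral shadow

# Power sum of both paths (loudspeakers) vs. direct path alone (headphones).
with_xtalk = 10 * np.log10(direct**2 + shadow**2)
without_xtalk = 10 * np.log10(direct**2)
loss = with_xtalk - without_xtalk

print(f"level lost at  50 Hz: {loss[np.argmin(np.abs(f - 50))]:.1f} dB")
print(f"level lost at 8 kHz: {loss[np.argmin(np.abs(f - 8000))]:.1f} dB")
```

Under these assumptions the bass loses about 3 dB while 8 kHz loses almost nothing, consistent with the paper's suggestion that headphone playback benefits from a low-frequency boost.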

P7-5 The Use of Microphone Level Balance in Blending the Timbre of Horn and Bassoon Players
Sven-Amin Lembke, McGill University - Montreal, Quebec, Canada; De Montfort University - Leicester, UK; Scott Levine, Skywalker Sound; Martha de Francisco, McGill University - Montreal, Quebec, Canada; Stephen McAdams, McGill University - Montreal, Quebec, Canada
A common aim of orchestration is to achieve a blended timbre for certain instrument combinations. Its success has been shown to depend in part on the timbral coordination between musicians during performance; this study extends that work by considering the subsequent involvement of sound engineers. We report the results of a production experiment in which sound engineers mixed independent feeds from a main microphone and two spot microphones to blend the timbre of pairs of bassoon and horn players in a two-channel stereo mix. The balance of microphone feeds is shown to be affected by leadership roles between performers, the musical material, and aspects of room acoustics and performer characteristics.
Convention Paper 9397

P7-6 101 Mixes: A Statistical Analysis of Mix-Variation in a Dataset of Multi-Track Music Mixes
Alex Wilson, University of Salford - Salford, Greater Manchester, UK; Bruno Fazenda, University of Salford - Salford, Greater Manchester, UK
The act of mix-engineering is a complex combination of creative and technical processes; analysis is often performed qualitatively, by studying the techniques of a few expert practitioners. We propose instead to study the actions of a large group of mix-engineers of varying experience, introducing a quantitative methodology to investigate mix-variation and the perception of quality. This paper describes the analysis of a dataset containing 101 alternate mixes generated by human mixers as part of an online mix competition. A varied selection of audio signal features is obtained from each mix, and subsequent principal component analysis reveals four prominent dimensions of variation: dynamics, treble, width, and bass. An ordinal logistic regression model suggests that the ranking of each mix in the competition was significantly influenced by these four dimensions (a minimal sketch of this analysis pipeline appears below). The implications for the design of intelligent music production systems are discussed.
Convention Paper 9398
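
The pipeline named in the abstract (feature extraction, PCA, then ordinal regression of rank on component scores) can be sketched as follows. Everything here is a stand-in under stated assumptions: the features, the component weights, and the quality tiers are synthetic placeholders, not the paper's dataset or results.

```python
# Minimal sketch of the PCA + ordinal-logistic-regression pipeline
# (synthetic stand-in data; not the paper's dataset).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)

# Hypothetical dataset: 101 mixes x 20 extracted signal features
# (dynamics, spectral, and stereo-width measures in the paper).
X = rng.normal(size=(101, 20))

# Standardize, then reduce to four principal components, mirroring the
# four reported dimensions of variation (dynamics, treble, width, bass).
Z = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(X))

# Hypothetical competition ranks, binned into ordered quality tiers.
score = Z @ np.array([0.9, -0.5, 0.4, 0.6]) + rng.normal(scale=0.5, size=101)
tiers = pd.Series(pd.cut(score, bins=3, labels=["low", "mid", "high"],
                         ordered=True))

# Ordinal logistic regression of tier on the four component scores.
res = OrderedModel(tiers, Z, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```

Significant coefficients on the component scores in such a model are what would support the paper's claim that the four dimensions influenced competition ranking.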

