AES New York 2019
Paper Session P08
P08 - Recording, Production, and Live Sound
Thursday, October 17, 9:00 am — 11:30 am (1E11)
Chair:
Wieslaw Woszczyk, McGill University - Montreal, QC, Canada
P08-1 Microphone Comparison: Spectral Feature Mapping for Snare Drum Recording—Matthew Cheshire, Birmingham City University - Birmingham, UK; Ryan Stables, Birmingham City University - Birmingham, UK; Jason Hockman, Birmingham City University - Birmingham, UK
Microphones are known to exhibit sonic differences, and microphone selection is integral to achieving the desired tonal qualities of a recording. In this paper, an initial multi-stimulus listening test is used to categorize microphones based on user preference when recording snare drums. A spectral modification technique is then applied to recordings made with a microphone from the least preferred category, such that they take on the frequency characteristics of recordings from the most preferred category. To assess the success of the audio transformation, a second experiment is undertaken with expert listeners to gauge pre- and post-transformation preferences. Results indicate that the spectral transformation dramatically improves listener preference for recordings from the least preferred category, placing them on par with those of the most preferred.
Convention Paper 10263 (Purchase now)
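The abstract above does not detail the spectral modification itself, so the following is only a minimal sketch, assuming a long-term spectral-envelope matching approach: the smoothed ratio between the target (most preferred) and source (least preferred) magnitude spectra is applied to the source recording as a linear-phase equalization filter. File names and smoothing choices are placeholders, not the method of Convention Paper 10263.

    # Sketch: map the long-term spectrum of a least-preferred snare recording
    # toward that of a most-preferred one (assumed approach, placeholder paths,
    # mono files assumed).
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch, fftconvolve

    def average_spectrum(x, fs, nfft=4096):
        # Welch estimate of the long-term magnitude spectrum
        f, pxx = welch(x, fs, nperseg=nfft)
        return f, np.sqrt(pxx)

    fs, src = wavfile.read("snare_least_preferred.wav")   # placeholder file
    _, tgt = wavfile.read("snare_most_preferred.wav")     # placeholder file
    src = src.astype(np.float64)
    tgt = tgt.astype(np.float64)

    _, src_mag = average_spectrum(src, fs)
    _, tgt_mag = average_spectrum(tgt, fs)

    # Smoothed correction curve: push the source envelope toward the target
    ratio = tgt_mag / np.maximum(src_mag, 1e-12)
    ratio = np.convolve(ratio, np.ones(9) / 9, mode="same")

    # Build a linear-phase FIR filter from the correction curve and apply it
    fir = np.fft.irfft(ratio, n=4096)
    fir = np.roll(fir, 2048) * np.hanning(4096)
    transformed = fftconvolve(src, fir, mode="same")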
P08-2 An Automated Approach to the Application of Reverberation—Dave Moffat, Queen Mary University of London - London, UK; Mark Sandler, Queen Mary University of London - London, UK
The field of intelligent music production has been growing over recent years, and there have been several different approaches to automated reverberation. In this paper we automate the parameters of an algorithmic reverb based on analysis of the input signals. The literature is used to produce a set of rules for the application of reverberation, and these rules are then represented directly as audio features. This audio feature representation is then used to control the reverberation parameters from the audio signal in real time.
Convention Paper 10264 (Purchase now)
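The rule set and reverb used in the paper are not given in the abstract; the fragment below is only a sketch of the general idea of feature-driven control, assuming a made-up rule in which a per-frame RMS level sets the feedback gain (and hence decay) of a toy comb-filter reverberator.

    # Sketch of feature-driven reverberation control (illustrative rule only):
    # louder input frames get a shorter decay, quieter frames a longer one.
    import numpy as np

    def feature_controlled_reverb(x, fs, frame=1024, delay_ms=50.0):
        d = int(fs * delay_ms / 1000.0)      # comb-filter delay in samples
        y = np.zeros(len(x))
        buf = np.zeros(d)                    # delay line
        pos = 0
        for start in range(0, len(x), frame):
            block = x[start:start + frame]
            rms = np.sqrt(np.mean(block ** 2) + 1e-12)
            # Assumed rule: feedback gain (decay) decreases as level increases
            g = np.clip(0.8 - 2.0 * rms, 0.2, 0.8)
            for n, s in enumerate(block):
                out = s + g * buf[pos]
                buf[pos] = out
                pos = (pos + 1) % d
                y[start + n] = out
        return y

    fs = 44100
    dry = 0.1 * np.random.randn(fs)          # one second of test noise
    wet = feature_controlled_reverb(dry, fs)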
P08-3 Subjective Graphical Representation of Microphone Arrays for Vertical Imaging and Three-Dimensional Capture of Acoustic Instruments, Part II—Bryan Martin, McGill University - Montreal, QC, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, QC, Canada; Denis Martin, McGill University - Montreal, QC, Canada; CIRMMT - Montreal, QC, Canada; Richard King, McGill University - Montreal, QC, Canada; CIRMMT - Montreal, QC, Canada; Wieslaw Woszczyk, McGill University - Montreal, QC, Canada
This investigation employs a simple graphical method to represent the perceived spatial attributes of three microphone arrays designed to create vertical and three-dimensional audio images. Three separate arrays were investigated in this study: Coincident, M/S-XYZ, and Non-coincident/Five-point capture. Instruments of the orchestral string, woodwind, and brass sections were recorded. Test subjects were asked to represent the spatial attributes of the perceived audio image on a horizontal/vertical grid and a graduated depth grid, via a pencil drawing. Results show that the arrays exhibit a greater extent in the vertical, horizontal, and depth dimensions compared to the monophonic image. The statistical trends show that the spatial characteristics of each array are consistent across each dimension. In the context of immersive/3D mixing and post-production, a case can be made that the arrays will contribute to a more efficient and improved workflow because they are easily optimized during mixing or post-production.
Convention Paper 10265 (Purchase now)
P08-4 Filling The Space: The Impact of Convolution Reverberation Time on Note Duration and Velocity in Duet Performance—James Weaver, Queen Mary University of London - London, UK; Mathieu Barthet, Queen Mary University of London - London, UK; Elaine Chew, CNRS-UMR9912/STMS (IRCAM) - Paris, France
This paper will not be presented
The impact of reverberation on musical expressivity is an area of growing interest as technology to simulate and create acoustic environments improves. Characterizing the impact of acoustic environments on musical performance is a problem of interest to acousticians, designers of virtual environments, and algorithmic composers. We analyze the impact of convolution reverberation time on note duration and note velocity, which serve as markers of musical expressivity. To improve note clarity under long reverberation times, we posit that musicians performing in a duo would lengthen the separation between notes (note duration) and increase loudness (note velocity) contrast. The data for this study comprise MIDI messages extracted from performances by two co-located pianists playing the same piece of music 100 times across 5 different reverberation conditions. To our knowledge, this is the largest data set to date examining piano duo performance in a range of reverberation conditions. In contrast to prior work, the analysis considers both the entire performance and an excerpt from the opening of the piece featuring a key structural element of the score. The analysis finds convolution reverberation time to be moderately positively correlated with mean note duration (r = 0.34, p < 0.001), but no significant correlation was found between convolution reverberation time and mean note velocity (r = -0.19, p = 0.058).
Convention Paper 10266 (Purchase now)
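For readers unfamiliar with the reported statistics, the sketch below shows how a Pearson correlation of this kind could be computed from per-performance summaries; the arrays are invented placeholders, not the study's MIDI data.

    # Pearson r between reverberation time and mean note duration (toy data)
    import numpy as np
    from scipy.stats import pearsonr

    # One entry per performance: the reverberation time (s) of its condition
    # and the mean MIDI note duration (s) extracted from that performance.
    rt = np.repeat([0.5, 1.0, 1.5, 2.0, 2.5], 20)      # 5 conditions x 20 runs
    mean_duration = 0.30 + 0.02 * rt + np.random.normal(0, 0.05, rt.size)

    r, p = pearsonr(rt, mean_duration)
    print(f"r = {r:.2f}, p = {p:.3g}")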
P08-5 The Effects of Spectators on the Speech Intelligibility Performance of Sound Systems in Stadia and Other Large Venues—Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK; Ross Hammond, University of Derby - Derby, Derbyshire, UK; Peter Mapp Associates - Colchester, UK
Stadiums and similar venues in the UK and throughout most of Europe are subject to strict safety standards and regulations, including the performance of their Public Address systems. The usual requirement is for the PA system to achieve a potential speech intelligibility performance of 0.50 STI, though some authorities and organizations require a higher value. However, a problem exists with measuring the performance of the system, as this can only be carried out in the empty stadium. The paper shows that with occupancy the acoustic conditions change significantly, as the spectators introduce significant sound absorption and also increase the background noise level. The effect this can have on the intelligibility performance of the sound system is examined and discussed. The relationship between the unoccupied starting conditions and audience absorption and distribution is also investigated.
Convention Paper 10267 (Purchase now)
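As a rough illustration of why occupancy matters (not a calculation from Convention Paper 10267), Sabine's equation RT60 = 0.161 V / A shows how the extra absorption contributed by spectators shortens the reverberation time, which generally improves the achievable STI. All numbers below are invented, stadium-scale placeholders.

    # Sabine estimate of reverberation time, empty vs. occupied (toy numbers)
    def rt60_sabine(volume_m3, absorption_sabins):
        return 0.161 * volume_m3 / absorption_sabins

    volume = 1.5e6               # enclosed volume in m^3 (illustrative)
    absorption_empty = 60000.0   # total absorption, empty venue (m^2 sabins)
    per_person = 0.45            # added absorption per spectator (m^2 sabins)
    spectators = 50000

    rt_empty = rt60_sabine(volume, absorption_empty)
    rt_full = rt60_sabine(volume, absorption_empty + per_person * spectators)
    print(f"RT60 empty: {rt_empty:.1f} s, occupied: {rt_full:.1f} s")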