Thursday, September 29, 2:15 pm – 3:45 pm (Rm 409B)
Chair:
Hyunkook Lee, University of Huddersfield - Huddersfield, UK
P4-1 A Three-Dimensional Orchestral Music Recording Technique, Optimized for 22.2 Multichannel Sound—Will Howie, McGill University - Montreal, QC, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, QC, Canada; Richard King, McGill University - Montreal, QC, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, QC, Canada; Denis Martin, McGill University - Montreal, QC, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, QC, Canada
Building on results from previous research, as well as a new series of experimental recordings, a technique for three-dimensional orchestral music recording is introduced. The technique has been optimized for 22.2 Multichannel Sound, a playback format well suited to orchestral music reproduction. A novel component of the technique is the use of dedicated microphones for the bottom channels, which vertically extend and anchor the sonic image of the orchestra. Using highly dynamic orchestral material, an ABX listening test confirmed that listeners could reliably distinguish playback conditions with and without the bottom channels (a minimal significance check for such a test is sketched below).
Convention Paper 9612
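An ABX result like the one above is typically checked against chance with a one-sided exact binomial test. The following is a minimal sketch of that check; the trial and score counts are illustrative and are not taken from the paper.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring at least `correct` out of `trials`
    two-alternative ABX trials purely by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative numbers only: 14 of 16 correct is very unlikely by chance.
print(abx_p_value(14, 16))  # ~0.0021, well below the usual 0.05 threshold
```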
P4-2 Subjective Graphical Representation of Microphone Arrays for Vertical Imaging and Three-Dimensional Capture of Acoustic Instruments, Part I—Bryan Martin, McGill University - Montreal, QC, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, QC, Canada; Richard King, McGill University - Montreal, QC, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, QC, Canada; Wieslaw Woszczyk, McGill University - Montreal, QC, Canada
This investigation uses a simple graphical method to represent the perceived spatial attributes of three microphone arrays designed to create vertical and three-dimensional audio images: (1) coincident, (2) M/S-XYZ, and (3) non-coincident. Instruments of the orchestral string, woodwind, and brass sections were recorded. Test subjects were asked to draw, in pencil, the spatial attributes of the perceived audio image on a horizontal/vertical grid. The subjects' drawings show that these arrays capture considerably more spatial information than a single microphone, exhibiting vertical as well as horizontal aspects of the audio image (one hypothetical way to quantify such drawings is sketched below).
Convention Paper 9613
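One plausible way to quantify drawings like those described above is to digitize each response as a binary mask on the grid, pool the masks across subjects, and summarize the pooled image by its centroid (perceived position) and weighted spread (perceived extent). The sketch below is a hypothetical analysis under those assumptions, not the authors' method; the array shapes and data are placeholders.

```python
import numpy as np

# Hypothetical data: 12 subjects' pencil drawings digitized as binary masks
# on a 50 x 50 horizontal (azimuth) x vertical (elevation) grid.
rng = np.random.default_rng(0)
drawings = rng.random((12, 50, 50)) > 0.8  # placeholder masks

density = drawings.mean(axis=0)            # fraction of subjects marking each cell
ys, xs = np.indices(density.shape)
total = density.sum()

# Weighted centroid: where the perceived image sits on the grid.
v_pos = (ys * density).sum() / total
h_pos = (xs * density).sum() / total

# Weighted standard deviations: vertical and horizontal extent of the image.
v_extent = np.sqrt((((ys - v_pos) ** 2) * density).sum() / total)
h_extent = np.sqrt((((xs - h_pos) ** 2) * density).sum() / total)

print(f"centroid=({h_pos:.1f}, {v_pos:.1f}), extent=({h_extent:.1f}, {v_extent:.1f})")
```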
P4-3 Grateful Live: Mixing Multiple Recordings of a Dead Performance into an Immersive Experience—Thomas Wilmering, Queen Mary University of London - London, UK; Centre for Digital Music (C4DM) - London, UK; Florian Thalmann, Queen Mary University of London - London, UK; Mark B. Sandler, Queen Mary University of London - London, UK
Recordings of historical live music performances often exist in several versions, recorded from the mixing desk, on stage, or by audience members. These recordings highlight different aspects of the performance but typically vary in recording quality, playback speed, and segmentation. We present a system that automatically aligns and clusters live music recordings based on various audio characteristics and editorial metadata. The system creates an immersive virtual space that can be imported into a multichannel web or mobile application, allowing listeners to navigate the space using interface controls or mobile device sensors. We evaluate the system with recordings of different lineages from the Live Music Archive's Grateful Dead collection (a minimal alignment sketch follows below).
Convention Paper 9614
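Time-aligning recordings of different lineage is the first step in a system like this. A common stand-in technique (the paper's own pipeline also handles playback-speed variation and clustering, which this sketch does not) is to locate the peak of the cross-correlation between two recordings. A minimal sketch, assuming both inputs are mono sample arrays at the same sample rate:

```python
import numpy as np

def align_offset(reference: np.ndarray, other: np.ndarray, sr: int) -> float:
    """Estimate how many seconds `other` starts after `reference`
    by locating the peak of their full cross-correlation."""
    corr = np.correlate(other, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / sr

# Illustrative use: `oth` is a copy of the reference delayed by 2 samples.
sr = 4
ref = np.array([1.0, 0.0, 0.0, 0.0])
oth = np.array([0.0, 0.0, 1.0, 0.0])
print(align_offset(ref, oth, sr))  # 0.5 seconds
```

Note that direct cross-correlation is O(N²) in the recording length; for full concert recordings, FFT-based correlation or feature-level alignment (e.g., on chroma or onset features) is the practical choice.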