AES Dublin 2019
Paper Session P15
P15 - Production and Synthesis
Friday, March 22, 13:30 — 15:30 (Meeting Room 3)
Chair: Joseph Timoney, Maynooth University - Maynooth, Kildare, Ireland
P15-1 Investigating the Behavior of a Recursive Mutual Compression System in a Two-Track Environment—Hairul Hafizi Bin Hasnan, University of York - York, UK; Jeremy J. Wells, University of York - York, UK
Dynamic range compression is a widely used audio process. Recent trends in music production include its emergence as a creative tool rather than merely a corrective device. Conventionally, the control for this process is unidirectional: one signal manipulates one or many tracks. This paper examines the behavior of a bidirectional mutual compression system implemented in Max/MSP. Tests were conducted using amplitude-modulated sine waves chosen to highlight different attributes of the system.
Convention Paper 10182
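The two-way side-chain topology described in the abstract can be sketched in a few lines. This is a minimal illustrative model, not the paper's implementation: the one-pole envelope follower, hard-knee compression curve, and the threshold, ratio, and ballistics values are all assumptions chosen for clarity.

```python
import math

def envelope(sample, prev, attack=0.9, release=0.9995):
    """One-pole peak follower: fast attack, slow release."""
    level = abs(sample)
    coeff = attack if level > prev else release
    return coeff * prev + (1.0 - coeff) * level

def gain_reduction_db(env, threshold_db=-20.0, ratio=4.0):
    """Hard-knee downward compression: dB of reduction above threshold."""
    level_db = 20.0 * math.log10(max(env, 1e-9))
    over = level_db - threshold_db
    return over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0

def mutual_compress(track_a, track_b):
    """Bidirectional (recursive) mutual compression: each side-chain
    follows the OTHER track's previous output sample, so gain reduction
    applied to one track feeds back into the control signal of the other."""
    env_a = env_b = 0.0
    y_a = y_b = 0.0
    out_a, out_b = [], []
    for x_a, x_b in zip(track_a, track_b):
        env_a = envelope(y_a, env_a)
        env_b = envelope(y_b, env_b)
        y_a = x_a * 10.0 ** (-gain_reduction_db(env_b) / 20.0)  # b ducks a
        y_b = x_b * 10.0 ** (-gain_reduction_db(env_a) / 20.0)  # a ducks b
        out_a.append(y_a)
        out_b.append(y_b)
    return out_a, out_b
```

With a loud sine on one track and a quiet sine on the other, the loud track passes nearly unchanged while the quiet one is ducked — one asymmetry that amplitude-modulated test signals could expose in such a system.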
P15-2 Turning the DAW Inside Out—Charles Holbrow, Massachusetts Institute of Technology - Cambridge, MA, USA; MIT Media Lab
“Turning the DAW Inside Out” describes a speculative, internet-enabled sound recording and music production technology. The internet changed music authorship, ownership, and distribution, and we expect connected digital technologies to continue shaping how music is created and consumed. Our goal is to explore an optimistic future wherein musicians, audio engineers, software developers, and music fans all benefit from an open ecosystem of connected digital services. In the process, we review a range of existing tools for internet-enabled audio and audio production, and consider how they can grow to support a new generation of music creation technology.
Convention Paper 10183
P15-3 Real-Time Synthesis of Sound Effects Caused by the Interaction between Two Solids—Pedro Sánchez, Queen Mary University of London - London, UK; Joshua D. Reiss, Queen Mary University of London - London, UK
We present the implementation of two sound effect synthesis engines that work in a web environment. These are physically driven models that recreate the sonic behavior of friction and impact interactions. The models are integrated into an online project aimed at providing users with browser-based sound effect synthesis tools that can be controlled in real time. This is achieved through a physical modelling approach combined with existing web technologies such as the Web Audio API. A modular architecture keeps the code versatile and easy to reuse, encouraging both higher-level models built on the existing ones and similar models based on the same principles. The final implementations perform satisfactorily, with only minor issues.
Convention Paper 10184
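One common physically-driven approach to impact sounds — and one plausible reading of the abstract — is modal synthesis: the impact excites a bank of exponentially decaying resonant modes. The paper targets the browser via the Web Audio API; the sketch below shows the same idea in plain Python, with the mode table invented purely for illustration.

```python
import math

def modal_impact(modes, velocity=1.0, dur=0.5, sr=44100):
    """Render an impact as a sum of exponentially decaying sinusoids.

    `modes` is a list of (frequency_hz, decay_rate_per_s, amplitude)
    tuples describing the struck object's resonances; the strike
    velocity scales the whole response.
    """
    n = int(dur * sr)
    out = [0.0] * n
    for freq, decay, amp in modes:
        for i in range(n):
            t = i / sr
            out[i] += velocity * amp * math.exp(-decay * t) \
                      * math.sin(2.0 * math.pi * freq * t)
    return out
```

In a browser, each mode could map onto an OscillatorNode feeding a GainNode with an exponential release envelope — one way web tools like the Web Audio API make such models controllable in real time.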
P15-4 Reproducing Bass Guitar Performances Using Descriptor Driven Synthesis—Dave Foster, Queen Mary University of London - London, UK; Swing City Music Ltd - London, UK; Joshua D. Reiss, Queen Mary University of London - London, UK
Sample-based synthesis is a widely used method of synthesizing the sounds of live instrumental performances, but controlling such sampler instruments is made difficult by the number of parameters that shape the output, the expertise required to set those parameters, and the constraints of the real-time system. In this paper the principles of descriptor-driven synthesis were used to develop a pair of software tools that aid the user in the specific task of reproducing a live performance with a sampler instrument, by automatically generating MIDI controller messages derived from analysis of the input audio. The techniques employed build on existing work and commercially available products. The output of the system is compared to manipulation by expert users. The results show that the system outperforms the expert users, despite their manual edits taking considerably more time. Future developments of the techniques are discussed, including the application to automatic performer replication.
Convention Paper 10185
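The core loop of descriptor-driven control — analyze the input audio, then emit matching MIDI controller values — can be illustrated with a single loudness descriptor. The frame size, dB floor, choice of RMS, and the mapping to a controller are illustrative assumptions, not the descriptors or mappings used in the paper.

```python
import math

def rms_frames(audio, frame=512):
    """Frame-wise RMS: a simple loudness descriptor of the performance."""
    return [math.sqrt(sum(s * s for s in audio[i:i + frame]) / frame)
            for i in range(0, len(audio) - frame + 1, frame)]

def rms_to_cc(rms_values, floor_db=-60.0):
    """Map each frame's RMS (as dBFS) onto a 0-127 MIDI controller value,
    e.g. for CC 11 (expression) messages driving a sampler instrument."""
    ccs = []
    for r in rms_values:
        db = 20.0 * math.log10(max(r, 1e-9))
        norm = min(max((db - floor_db) / -floor_db, 0.0), 1.0)
        ccs.append(round(127 * norm))
    return ccs
```

A full-scale frame maps to 127 and silence to 0; timestamping each value at its frame position yields a stream of controller messages of the kind a sampler instrument consumes.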