W1: The Real World of Immersive Surround Production Techniques in Japan
Presenter(s) / Panel(s)
Mick Sawaguchi (UNAMAS-Label) and Hideo Irimajiri (WOWOW)
Abstract
In this workshop we will present real-world immersive surround production practices, drawn from music albums and programs produced in 9.1-channel and larger formats. The 75-minute workshop combines slides with playback examples to make it directly useful to participants.
Mick Sawaguchi, a four-time award-winning producer and engineer, has been making immersive classical music productions with his UNAMAS Label since 2014, recording in 9.1 to 11.1 channels at Ohga Hall, Karuizawa. He will introduce his recording concept in terms of art, technology, and engineering, showing immersive surround miking and various height-miking approaches optimized for the musical style and the delivery format.
Hideo Irimajiri will introduce various program productions, from 22.2 ch to 9.1 ch, for UHD-TV, as well as his W-Decca tree and various height-miking arrangements adapted to different events and venues.
W2: Techniques for recording and mixing pop, rock, and jazz music for 22.2 Multichannel Sound
Presenter(s) / Panel(s)
Will Howie (McGill University)
Abstract
This workshop will present several newly developed techniques and concepts for music recording and mixing in 22.2 Multichannel Sound. These techniques scale easily to 3D audio formats with fewer playback channels, or can be adapted to object-based workflows. Complex multi-microphone arrays designed to capture highly realistic direct-sound images are combined with spaced ambience arrays to reproduce a complete sound scene. Challenges and strategies for mixing music for 3D reproduction from traditional stereo multitracks will also be discussed, and numerous corresponding 3D audio examples will be played.
W3: Microphone arrangement comparison of orchestral instruments for recording producer and balance engineer education
Presenter(s) / Panel(s)
Thorsten Weigelt (Berlin University of the Arts) and Kazuya Nagae (Nagoya University of Arts)
Abstract
Every musical instrument has a specific sound-radiation pattern, which depends strongly on tone and frequency. This has been investigated extensively by Jürgen Meyer in his book “Acoustics and the Performance of Music”. As a consequence, the sound captured by a microphone depends heavily on its exact placement relative to the instrument. This is a fact every sound engineer and producer has to be aware of: the recorded sound changes quite drastically with microphone placement, and we have to choose the “best”, or better, the most appropriate placement for a specific recording situation.
We produced sound recordings of 15 orchestral instruments to make these differences audible for people working and studying in the field of recording arts. We hope these examples can supplement Jürgen Meyer's worthwhile and irreplaceable book. We recorded the samples in a realistic environment and situation, that is, as stereo recordings in a concert hall. This AES conference's topic is spatial audio, but where and what kind of sound radiates from an instrument is fundamental to music recording: it determines the balance between direct and indirect sound, and between musical and non-musical sounds. We believe these recordings will help in understanding this. http://soundmedia.jp/nuaudk/
W4: Microphone Techniques for 3D sound recording
Chair(s)
Hyunkook Lee (University of Huddersfield) and Kimio Hamasaki (ARTSRIDGE LLC)
Presenter(s) / Panel(s)
Helmut Wittek (SCHOEPS Mikrofone GmbH), Will Howie (McGill University), Thorsten Weigelt (Berlin University of the Arts), Florian Camerer (ORF), Toru Kamekawa (Tokyo University of Arts), Kimio Hamasaki (ARTSRIDGE LLC), Hyunkook Lee (University of Huddersfield)
Abstract
Over the last few years, various microphone techniques have been proposed for 3D sound recording in acoustic environments. Although they commonly add extra height microphones to an existing horizontal surround microphone array, they differ in the polar patterns and angular orientations of the height microphones and in the spacing between the lower and upper microphone layers. Broadly, there are two schools of technique for 3D music recording: main arrays using only omnidirectional microphones, and those that employ omnidirectional or directional main microphones together with directional height microphones. To provide a better understanding of the potential merits and limitations of the different approaches, this workshop brings together leading recording engineers and researchers in the field of 3D audio for a panel discussion, with the aim of actively engaging participants in the discussion. Each panel member will first share his or her own philosophy and technique for 3D sound capture, with demos where possible, before the panel discussion. The workshop will cover important quality aspects of 3D sound recording, such as localisation accuracy, spaciousness, timbral quality, musical intention, and the impression of ‘being there’, and will discuss how different techniques can help achieve those qualities in recording.
W5: Upmix and downmix techniques for 3D sound recording and reproduction
Chair(s)
Toru Kamekawa (Tokyo University of the Arts)
Presenter(s) / Panel(s)
Yo Sasaki (NHK STRL), Rafael Kassier (HARMAN Lifestyle Division)
Abstract
Several formats that include height channels have been proposed, such as NHK's 22.2-channel system and the Auro-3D system, both described in ITU-R BS.2159 (2012). For playback systems labeled 3D Audio, upmix and downmix techniques are important for bringing these systems to the public. In this workshop we compare several upmix and downmix techniques and discuss how to maintain compatibility with simpler, conventional playback methods.
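As a concrete illustration of the downmix side of this compatibility problem, the sketch below folds the four height channels of a 5.1.4 mix into a plain 5.1 bed. It is a minimal example under common textbook assumptions (static gains, a -3 dB height coefficient, and the channel ordering given in the comment), not a technique endorsed by the panel.

    import numpy as np

    # -3 dB is a conventional illustrative choice for folding height
    # channels into the base layer; real systems may use other gains.
    HEIGHT_GAIN = 1 / np.sqrt(2)

    def downmix_514_to_51(x: np.ndarray) -> np.ndarray:
        """x: (num_samples, 10) array ordered L R C LFE Ls Rs
        TpFL TpFR TpBL TpBR; returns (num_samples, 6) in 5.1 order."""
        base = x[:, :6].copy()
        base[:, 0] += HEIGHT_GAIN * x[:, 6]   # top front left  -> L
        base[:, 1] += HEIGHT_GAIN * x[:, 7]   # top front right -> R
        base[:, 4] += HEIGHT_GAIN * x[:, 8]   # top back left   -> Ls
        base[:, 5] += HEIGHT_GAIN * x[:, 9]   # top back right  -> Rs
        return base

Static matrices like this are the simplest case; the techniques compared in the workshop also have to cope with timbral coloration and with correlated signals that can comb-filter when summed.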
W6: Object-Based Audio workflow for spatial broadcast productions — using Audio Definition Model for mastering
Presenter(s) / Panel(s)
Matthieu Parmentier (France TV)
Abstract
This workshop will address the issues broadcasters face in delivering spatial audio productions over various distribution networks. Focusing on an Object-Based Audio workflow that maintains a good quality/cost ratio, the discussions and demos will highlight new methodologies and tools recently engineered to create and monitor master files using the Audio Definition Model, a free and open format.
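For orientation, the Audio Definition Model (standardized as ITU-R BS.2076) is an XML metadata model, typically carried in BW64 master files, that describes each audio element and how it should be rendered. The fragment below is a minimal, hand-written sketch of that structure, assuming one object-type channel with a single static 3D position; the IDs and names are illustrative, and real masters are normally generated and validated by dedicated authoring tools rather than written by hand.

    import xml.etree.ElementTree as ET

    # Minimal ADM-style metadata sketch: one audio object whose single
    # channel (typeLabel 0003 = Objects) carries a static 3D position.
    # Simplified for illustration; pack-format references etc. are omitted.
    fmt = ET.Element("audioFormatExtended")
    ET.SubElement(fmt, "audioObject",
                  audioObjectID="AO_1001", audioObjectName="soloist")
    chan = ET.SubElement(fmt, "audioChannelFormat",
                         audioChannelFormatID="AC_00031001",
                         audioChannelFormatName="soloist", typeLabel="0003")
    block = ET.SubElement(chan, "audioBlockFormat",
                          audioBlockFormatID="AB_00031001_00000001")
    for coord, value in (("azimuth", "-30.0"), ("elevation", "15.0"),
                         ("distance", "1.0")):
        ET.SubElement(block, "position", coordinate=coord).text = value

    print(ET.tostring(fmt, encoding="unicode"))

Because such metadata travels with the master file, the same object can be rendered to 22.2, 5.1, binaural, or any other target layout downstream, which is what makes a format-agnostic workflow possible.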
W7: Live spatial and Object-Based Audio production
Presenter(s) / Panel(s)
Matthieu Parmentier (France TV)
Abstract
This EBU Audio Systems project team plans to produce an experimental live production, using Object-Based Audio to feed different spatial audio formats in parallel, such as Ultra High Definition TV and 360-degree VR. The upcoming European Athletics Championships in Berlin (August 2018) is a candidate to host this live trial. The talk will present the audio production workflow in detail, supported by a live or near-live demo.
W8: Strategies for Controlling and Composing for the Cube Spatial Audio Renderer
Presenter(s) / Panel(s)
Charles Nichols (Institute for Creativity, Arts, and Technology, Virginia Tech)
Abstract
In the Moss Arts Center at Virginia Tech, the Institute for Creativity, Arts, and Technology (ICAT) has designed and built the Cube, a multimedia research lab and presentation venue incorporating a 134.6-speaker immersive spatial audio system, 9 directional narrow-beam speakers, a 4-projector 360° surround video projection system with 3D capabilities, a 24-camera motion-capture system, and a tetherless virtual reality system with head-mounted displays and backpack computers for up to 4 simultaneous users. As a faculty affiliate of ICAT, Assistant Professor of Composition and Creative Technologies Charles Nichols has helped research and design the audio system in the Cube, and has composed and performed several pieces utilizing the audio, video, and motion-capture systems. ICAT Media Engineer Tanner Upthegrove has helped research and design all of the multimedia systems in the Cube, and has composed his own music for the spatial audio system. For the workshop, Nichols will present ways that he and Upthegrove have controlled immersive spatial audio with commercial and custom software and hardware in the multimedia systems of the Cube. During the presentation, Nichols will perform his compositions What Bends and Anselmo, for electric violin, computer music, and processed video, and present from fixed media his compositions Beyond the Dark and Shakespeare's Garden, for computer music and video of installation art, along with rabies, for computer-generated electrometer band, by Upthegrove, in the 5.1.4-channel spatial audio system of the 100th Anniversary Hall at Tokyo Denki University.
W9: The Present of Spatial Audio Expression in VR Games
Presenter(s) / Panel(s)
Atsushi Ohta (BANDAI NAMCO Studios Inc.)
Abstract
At this workshop we will describe the current situation, the challenges, and the future of spatial audio expression in video games. Through developing VR games and attractions, we have continually pursued sound fields as realistic as any we have experienced, producing “Summer Lesson” and “VR ZONE.” We want to talk about the barriers encountered in video game development, the know-how gained from these VR titles, and our thoughts on the future of game audio expression.
W10: Creating Sound in Virtual 3-D Space —A Comparison of 3-D Audio Production—
Abstract
In this workshop, we'll tackle the “Aesthetics and Science” of game audio. Game sound designers use intuition and rules of thumb to design in-game audio, but is there an acoustical basis for their work? Our academic experts will unravel the mystery. We'll also examine post-production audio approaches and designs that rely heavily on intuition and rules of thumb. How do they compare in terms of production limitations and interactive experience? Since games are a product of programming, hardware processing power determines the acoustic implementation, which is then refined by the artist; users in turn experience the results through interaction, making games a unique type of media. “Aesthetics and Science” are sure to be of even greater importance to game audio in the future; it's time for us to take a closer look.
Multi-Grammy-winning producer and engineer team Jim Anderson and Ulrike Schwarz (Anderson Audio New York) have spent the past year recording and mixing music in high resolution and in immersive formats, in venues from New York to Norway to Havana. Their recordings have been made in various 3D recording formats and feature solo piano, big band, jazz trio and quartet, and orchestral performances. Mixing has taken place at Skywalker Sound, and mastering has been done by Bob Ludwig and Darcy Proper. Recordings will highlight performances by Jane Ira Bloom, Gonzalo Rubalcaba, the Jazz Ambassadors, and Norway's Stavanger Symphony Orchestra. Moderator Kimio Hamasaki will host an in-depth conversation with the two producers as they recount their experiences of recording in immersive formats.
T2: Psychoacoustics of 3D sound recording and reproduction
Presenter(s) / Panel(s)
Hyunkook Lee (Applied Psychoacoustics Lab, University of Huddersfield)
Abstract
3D surround audio formats aim to produce an immersive sound field in reproduction by utilising elevated loudspeakers. To make the most of the added height channels in sound recording and reproduction, it is necessary to understand the psychoacoustic principles of vertical stereophonic perception. This tutorial/demo session provides a comprehensive overview of the important psychoacoustic principles that recording engineers and spatial audio researchers need to consider when recording or rendering 3D sound. Topics include real and phantom image localisation mechanisms in the vertical plane, vertical interchannel crosstalk, vertical interchannel decorrelation, the phantom image elevation effect, perceptual equalisation for height enhancement, and the practical application of these research findings in 3D microphone array design. Various recording techniques for 3D sound capture and perceptual signal processing techniques to enhance the 3D image will also be introduced, accompanied by demos of various 3D recordings, including the recent Dolby Atmos and Auro-3D Blu-ray release of the Siglo de Oro choir.
T3: Ambisonic Recording and Mixing for Live Music Performance in 3D space
Abstract
As the VR/AR industry grows rapidly, Ambisonics has again attracted attention as a primary audio format for delivering 360-degree audio with 360-degree streaming video on platforms such as YouTube and Facebook. Ambisonics is, however, a demanding technology to use creatively. This workshop will present how to exploit the beauty of sound-field recording and mixing for 360-degree music video. Starting from a recap of Ambisonic basics, it will offer ideas on creative procedures for any kind of 360-degree video format.
The workshop will focus on productive and creative Ambisonic usage for music, walking through an Ambisonic recording and mixing pipeline using material from the presenter's Ambisonic music video project. Some mathematical understanding may be required to grasp the creative ideas behind Ambisonic technology, but the workshop will keep technical terminology to a minimum. Practical examples will be shown, including Ambisonic microphone and equipment selection, preparation for an Ambisonic music recording, and mixing tools and techniques; a minimal encoding sketch follows below.
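To make the recap of Ambisonic basics concrete, the sketch below encodes a mono signal into first-order Ambisonics in the AmbiX convention (ACN channel order, SN3D normalization), which the YouTube and Facebook 360 pipelines expect. It is a generic illustration of what an Ambisonic panner does internally, not code from the workshop.

    import numpy as np

    def encode_foa(mono: np.ndarray, azimuth_deg: float,
                   elevation_deg: float) -> np.ndarray:
        """Encode a 1-D mono signal to first-order AmbiX (W, Y, Z, X)."""
        az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
        gains = np.array([1.0,                       # W: omnidirectional
                          np.sin(az) * np.cos(el),   # Y: left-right
                          np.sin(el),                # Z: up-down
                          np.cos(az) * np.cos(el)])  # X: front-back
        return mono[:, None] * gains[None, :]        # (num_samples, 4)

    # e.g. place a source 30 degrees to the left and slightly raised:
    # bformat = encode_foa(signal, azimuth_deg=30.0, elevation_deg=15.0)

Decoding the resulting B-format to loudspeakers, or to binaural for headphone playback, is a separate rendering step, typically handled by plug-in suites or by the streaming platform itself.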
T4: Kraftwerk and Booka Shade — The Challenge to Create Electro Pop Music in Immersive / 3D audio
Presenter(s) / Panel(s)
Tom Ammermann (New Audio Technology)
Abstract
Music does not take a cinematic approach, with spaceships flying around the listener. Nonetheless, music can become a fantastic spatial listening adventure in immersive/3D audio. How this sounds will be demonstrated with the new Kraftwerk Blu-ray release, which won a Grammy this year, and the new Booka Shade Blu-ray release. Production philosophies, strategies, and workflows for creating immersive/3D audio in current workflows and DAWs will be shown and explained.
T5: Acoustic enhancement system: Lessons on spatial hearing from concert hall designs
Chair
Sungyoung Kim (Rochester Institute of Technology)
Presenter(s) / Panel(s)
Hideo Miyazaki and Takayuki Watanabe (Yamaha Corporation), Suyoung Lee (SoundKoreaENG Corporation)
Abstract
This tutorial provides a comprehensive understanding of spatial hearing in natural concert hall acoustics compared with modern acoustic enhancement systems. An acoustic enhancement system can alter the original, natural acoustic characteristics of a space using electro-acoustic devices, generating a new immersive acoustic environment. Concert hall acoustics have a rich history, and this tutorial draws important lessons from it with regard to spatial hearing. It will also discuss the connection between room acoustics and the latest audio technology, an invaluable asset for today's researchers in spatial sound capture and manipulation. The panelists will introduce the history of acoustic enhancement systems and discuss the latest developments of these systems related to spatial impression and sound-quality optimization. The second part of the tutorial will take place in a concert hall to demonstrate the manipulation of spatial attributes and its impact on musicians and listeners.
T5 includes an on-site demonstration at the following location:
Title: On-site demonstration of Acoustic enhancement system — Lessons on spatial hearing from concert hall designs
Venue: Yamaha Ginza Hall & Studio (9-14, 7 Chome, Ginza, Chūō-ku, Tōkyō-to 104-0061, Japan)
Time: 17:00–18:00, August 9th
Information:
1. Participation is limited to 30 people.
2. Tickets are available at the registration desk.
3. At the Ginza hall, only the ticket (not the conference badge) will be checked.