AES 61st Conference: Schedule

Audio for Games

Wednesday 10th February

Rooms: Fish, Science

09:00 - 10:00   Registration / Coffee
10:00 - 11:15   Tales of Audio from the Third Dimension!
11:15 - 11:45   Break
11:45 - 12:45   Timbre FX for the Sound Designer
12:45 - 14:15   Lunch
14:15 - 15:15   Immersive, filmic horror - the sound of Until Dawn
15:15 - 15:45   Break
15:45 - 17:30   Virtual Reality Spatial Audio
17:30 - 20:00   Dolby Reception (offsite)

Thursday 11th February

Rooms: Fish, Science, Council. Parallel sessions are shown separated by "|".

08:45 - 09:15   Registration / Coffee
09:50 - 10:50   The Boy from INSIDE | Fabric Demos
10:50 - 11:00   Break
11:00 - 12:00   Digital Foley
12:00 - 12:10   Break
12:10 - 13:40   Keynote
13:40 - 14:45   Lunch | Fabric Demos
14:45 - 15:45   Creature Sound Design | Papers 1: Spatial Audio Rendering
15:45 - 16:15   Break
16:15 - 17:15   Music for Virtual Reality Applications | Papers 2: Audio Content and Serious Games
17:15 - Late    Conference Social Event (Central London Pub)

Friday 12th February

Rooms: Fish, Science, Council. Parallel sessions are shown separated by "|".

09:00 - 09:30   Registration / Coffee
09:30 - 11:00   Environmental Audio Effects in VR and AR
11:00 - 11:30   Break
11:30 - 12:30   8 Into 1 Won't Go - The Perils of Fold-Down | Papers 3: Binaural Sound for VR
12:30 - 14:00   Lunch
14:00 - 15:30   Changing the Way Stories are Told & Games Will be Played | Papers 4: Synthesis and Sound Design
15:30 - 16:00   Closing Remarks
 
Wednesday 10th February
Tales of Audio from the Third Dimension!
Scott Selfon, Microsoft
High fidelity and non-repetitive sound effects, music, dialogue, and ambience are only the beginning of a compelling in-game sound experience. Spatialization (hearing sounds from where they are perceived to occur) is increasingly critical for traditional gameplay and virtual/augmented/mixed reality experiences alike. This talk surveys real-time 3D sound manipulation techniques in use today and on the horizon: dynamic simulation of position, distance, interaction with game geometry, environmental reverberation, and more. We'll offer a primer on topics both technical (HRTFs and other processing; spatial formats; middleware integration) and creative (placing non-diegetic audio in a mixed reality game; mixing techniques; evolving best practices for spatial implementations for headphone and speaker solutions).
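As a taste of the most basic layer of that survey, the sketch below combines inverse-distance attenuation with a constant-power pan; it is a generic illustration (the function name and parameters are ours, not the talk's).

    import math

    def attenuate_and_pan(sample, distance_m, azimuth_rad, ref_distance_m=1.0):
        # Inverse-distance rolloff, clamped so sources closer than the
        # reference distance are not boosted above unity gain.
        gain = ref_distance_m / max(distance_m, ref_distance_m)
        # Constant-power pan: map azimuth (-pi/2 hard left .. +pi/2 hard
        # right) onto 0..1, then split so left^2 + right^2 == gain^2.
        pan = (azimuth_rad + math.pi / 2) / math.pi
        left = sample * gain * math.cos(pan * math.pi / 2)
        right = sample * gain * math.sin(pan * math.pi / 2)
        return left, right

    # A unit sample 4 m away, 45 degrees to the listener's right:
    print(attenuate_and_pan(1.0, distance_m=4.0, azimuth_rad=math.pi / 4))

Real engines layer HRTF processing, geometry-driven occlusion and environmental reverberation on top of this kind of simple gain staging, which is where the talk picks up.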
Timbre FX for the Sound Designer
Alex Case, University of Massachusetts Lowell
One of the most important properties of sound, timbre has no dedicated signal processor. Instead, we shape timbre through several different effects. Equalization offers a direct method for limited timbral adjustment. For greater tonal flexibility, mastery of other effects is required. This tutorial details techniques for leveraging the timbre-redefining capabilities of compression, delay, reverb, and distortion processors. It makes clear the connections between each of these effects and timbre, and describes what to listen for as the relevant parameters are adjusted. The coordinated use of all of these effects for timbre, from slight modification to complete re-synthesis, maximizes sound design freedom while minimizing resource consumption.
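To make one of those connections concrete: a soft-clipping waveshaper (one simple form of the distortion the tutorial covers) turns a single-partial sine into a spectrally rich tone without changing its pitch. A minimal sketch, not taken from the tutorial:

    import numpy as np

    def tanh_waveshaper(x, drive=4.0):
        # Soft clipping: higher drive pushes more of the waveform into
        # the curved region of tanh, adding odd harmonics (brightness)
        # while leaving the fundamental pitch unchanged.
        return np.tanh(drive * x) / np.tanh(drive)

    sr = 48000
    t = np.arange(sr) / sr
    sine = 0.8 * np.sin(2 * np.pi * 220 * t)   # one partial at 220 Hz
    shaped = tanh_waveshaper(sine)             # 220 Hz plus odd harmonics

The other effects reshape timbre just as indirectly (short delays comb-filter, compression re-balances attack against sustain), which is why knowing what to listen for matters.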
Immersive, filmic horror - the sound of Until Dawn
Barney Pratt, Audio Director, Supermassive Games
Until Dawn was always going to have a 'big' soundtrack due to the impressive setting and the twists and turns of the narrative. It places the player in the director's chair, allowing them to play their own film while immersed in the action, always the voyeur.

Until Dawn’s Audio Director Barney Pratt takes an in-depth look at some of the creative, technical and philosophical approaches that went into making the sound of the game, from seamless immersion and reworked panner plugins for improved voyeurism to greater emotional nuance achieved through traditional film editing techniques.
Virtual Reality Spatial Audio
Gavin Kearney, AudioLab, Department of Electronics, University of York, Marcin Gorzel and Alper Gungormusler, Google Inc, Pedro Corvo, Playstation VR, Sony Computer Entertainment Europe Ltd., Jelle Van Mourik, Playstation VR, Sony Computer Entertainment Europe Ltd. and Varun Nair, Two Big Ears Ltd.
In recent years, major advances in gaming technologies, such as cost-effective head-tracking and immersive visual headsets, have paved the way for commercially viable virtual reality to be delivered to the individual. Now the consumer finally has the opportunity to experience new gaming, cinematic and social media experiences with truly immersive and interactive 3-D audio and video content.

For many sound designers, rendering a truly dynamic and spatially coherent mix for VR presents a new learning curve in soundtrack production. What spatial audio techniques should we be using to create engaging and interactive 3-D mixes? What audio workflows should we employ for similar immersive experiences on headphones, 5.1 loudspeakers and beyond? Are new VR production methods backwards compatible with existing game audio pipelines? Can binaural reproduction work for everyone?

In this workshop our panel of experts will present practical workflows for mixing and rendering 3-D sound for VR. The workshop will explore different production techniques for creating immersive mixes, such as Ambisonics processing and Head-Related Transfer Function rendering. It will also explore the importance of environmental rendering for VR, as well as outline workflow challenges and pipelines for dynamic spatial audio over a variety of VR technologies and applications.
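As a flavour of one technique on the panel's list, the sketch below encodes a mono source into first-order Ambisonics using the ambiX convention (ACN channel order, SN3D normalisation); it is a textbook illustration, not code from any panelist's toolchain.

    import math

    def encode_first_order_ambisonics(sample, azimuth, elevation):
        # ambiX: channel order W, Y, Z, X; angles in radians, azimuth
        # measured counter-clockwise from straight ahead.
        w = sample                                            # omni
        y = sample * math.sin(azimuth) * math.cos(elevation)  # left/right
        z = sample * math.sin(elevation)                      # up/down
        x = sample * math.cos(azimuth) * math.cos(elevation)  # front/back
        return w, y, z, x

Because the encoded sound field can be rotated in the channel domain before the final binaural (HRTF) or loudspeaker decode, head-tracking becomes cheap, which is one reason Ambisonics features so prominently in VR audio pipelines.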
Thursday 11th February
The Boy from INSIDE: Uncompromising Character Audio Implementation
Jakob Schmid, Playdead
A 5-year collaboration between sound designer Martin Stig Andersen and programmer Jakob Schmid on INSIDE, Playdead's follow-up to the award-winning game LIMBO, has led to an uncompromising audio implementation, unique in its design choices and level of detail. This talk focuses on the design and implementation of foley and voice for the main character of INSIDE. It explains how game state and character geometry are analyzed to provide data for audio systems, describes a method for context-dependent sound selection for footsteps, and details a breath sequencer that reacts to player input and animation, matching rhythmic breathing to footsteps. Finally, it presents a selection of the tools used to configure and debug audio.
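Playdead's actual selection logic draws on much richer game-state analysis than can be shown here, but as a toy illustration of context-dependent footstep selection (the surface and gait keys below are hypothetical):

    import random

    # Hypothetical variation bank keyed by (surface, gait).
    FOOTSTEPS = {
        ("mud", "walk"):  ["mud_walk_01", "mud_walk_02", "mud_walk_03"],
        ("mud", "run"):   ["mud_run_01", "mud_run_02"],
        ("wood", "walk"): ["wood_walk_01", "wood_walk_02"],
    }

    _last_played = {}

    def pick_footstep(surface, gait):
        # Choose a variation for the current context, avoiding an
        # immediate repeat of the previously played file.
        key = (surface, gait)
        options = FOOTSTEPS[key]
        candidates = [s for s in options if s != _last_played.get(key)]
        choice = random.choice(candidates or options)
        _last_played[key] = choice
        return choice

    print(pick_footstep("mud", "walk"))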
Digital Foley: Leveraging Human Gesture in Game Audio
Christian Heinrichs and Andrew McPherson, Queen Mary University of London
One of the greatest challenges facing game audio over the next few years is the incorporation of nuanced, continuous interaction between the player's movements and the virtual environment. Designing sound for game objects must progress from an exercise in dubbing to crafting the kinds of rich action-sound relationships one might find in a musical instrument. Procedural audio has some answers to this problem, but both industry and research focus on realism and efficiency while ignoring contextual aesthetics and behaviour in the design process. Physically-based sound engines that match the properties of an object are a step in the right direction, but often fail to capture the expressivity of sound performed by a Foley artist.
This talk proposes human gesture as a fundamental tool in the design of next-generation procedural game audio. Aside from nuanced interaction becoming a more pronounced part of the gameplay experience, gesture can also be employed in all stages of the sound design process itself:
1. Familiarising oneself with a sound model by means of casual gestural exploration
2. Designing its behaviour in a game context
3. Integrating the model into a game using recorded gestures
We present FoleyDesigner, a software prototype developed during a six-month collaboration between Queen Mary's Centre for Digital Music and Enzien Audio. The project explores such a workflow with the aid of Bela, a new low-latency embedded audio platform, and Heavy, a highly optimised compiler for audio programming languages.
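As a self-contained toy illustrating stage 3 of the list above (replaying a recorded gesture to drive a model), consider the following; the model class is a deliberately crude stand-in, since the project's real models are compiled with Heavy and run on Bela.

    import math

    class ToySoundModel:
        # Stand-in for a procedural sound model: one sine oscillator
        # whose level and frequency track a gesture "pressure" value.
        def __init__(self, sample_rate=48000):
            self.sr = sample_rate
            self.pressure = 0.0   # gesture-controlled parameter
            self.phase = 0.0

        def render_block(self, n=64):
            out = []
            for _ in range(n):
                self.phase += (100.0 + 400.0 * self.pressure) / self.sr
                out.append(self.pressure * math.sin(2 * math.pi * self.phase))
            return out

    # A recorded pressure envelope, replayed one value per audio block.
    recorded_gesture = [0.0, 0.2, 0.6, 0.9, 0.7, 0.3, 0.1]
    model = ToySoundModel()
    audio = []
    for value in recorded_gesture:
        model.pressure = value
        audio.extend(model.render_block())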
Keynote
Martin Stig Andersen, Composer and Sound Designer
Martin Stig Andersen will guide the audience through the unique musical journey that led him to create the multiple award-winning audio for the video game LIMBO. Showcasing an eclectic mix of examples from a decade of work in electroacoustic music and audiovisual arts, Andersen will reflect on how the dynamic relationship between sound and image can be explored as a means to reveal new stories and emotional experiences. The session will illustrate how Andersen's preoccupation with audiovisual synergy and ambiguity eventually caused him to blur the dividing line between music and sound design, an approach that found special relevance in LIMBO but also in Playdead's next title, INSIDE.
Creature Sound Design
Orfeas Boteas and Matthew Collings, Krotos Ltd
Designing sound for custom creatures for film, TV and games can be a time-consuming and costly procedure. We will discuss traditional methods of going about this process, as well as more contemporary solutions, including real-time processing. We'll also discuss treating this aspect of sound design as a performance, focusing on the expressivity and subtle nuances that can be achieved by taking this approach.
Music for Virtual Reality Applications
Joe Thwaites, Sony Computer Entertainment Europe
Virtual reality (VR) provides new ways of presenting sound and music to the user, and this has implications for composition and sound creation. After working with the Playstation VR headset for two years, we have begun to collate guidelines for creating music for VR applications. This talk will discuss new possibilities offered by the VR medium, as well as revisit traditional approaches, considering among other things: content creation, implementation, and mixing. As consumer VR products are still in their infancy, this talk aims to share the insights gathered so far, expose misconceptions, and offer practical advice for composers seeking to design music for VR applications.
Friday 12th February
Environmental Audio Effects in VR and AR
Chair: Jean-Marc Jot, DTS. Panelists: Lakulish Antani, Impulsonic, Scott Selfon, Microsoft, Simon Ashby, AudioKinetic, Simon Gumbleton, SCEE
The emergence of commercial VR and AR hardware and applications has placed renewed emphasis on the fidelity and performance of positional and environmental audio rendering engines and creation tools for games. In order to support the expected sense of immersion and suspension of disbelief, the reproduced audio scene must be spatially and acoustically congruent with the first-person visual presentation, and accurately respond to the player’s head rotation and navigation movements. In addition to the oft-discussed technological challenges associated with HRTF-based binaural rendering technology, these applications highlight the need for plausible reproduction of the natural effects of acoustic source directivity and orientation, obstacles, reflections and reverberation. Following the initial adoption, in the late 1990s, of basic parametric effect models for “sound cones”, “obstruction”, “occlusion” and “environment reverbs”, continued research in physics-based acoustic environment simulation methods has led recently to commercial solutions designed for integration in game audio rendering engines and tools. In this session, application developers, sound designers, technologists and researchers will review and discuss the principles, practical implementation and ongoing challenges of environmental audio effects in VR or AR applications.
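For reference, the late-1990s parametric models the panel alludes to reduce to a few lines per effect; here is a sketch of the classic "sound cone", with arbitrary example parameters.

    import math

    def cone_gain(off_axis_deg, inner_angle_deg=60.0,
                  outer_angle_deg=180.0, outer_gain_db=-12.0):
        # Full level inside the inner cone, fixed attenuation outside
        # the outer cone, and a linear-in-dB blend in between. Angles
        # are measured off the source's front axis; cone angles are
        # total apertures.
        a = abs(off_axis_deg)
        if a <= inner_angle_deg / 2:
            return 1.0
        if a >= outer_angle_deg / 2:
            return 10 ** (outer_gain_db / 20)
        t = (a - inner_angle_deg / 2) / ((outer_angle_deg - inner_angle_deg) / 2)
        return 10 ** (t * outer_gain_db / 20)

    print(cone_gain(90.0))   # listener well off-axis: about 0.25 (-12 dB)

Physics-based simulation replaces such hand-tuned parameters with reflections and reverberation derived from the actual scene geometry, which is precisely the shift this session examines.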
8 Into 1 Won't Go - The Perils of Fold-Down
Simon Goodwin, DTS
It has become traditional for non-interactive audio to be 'downmixed' from multi-channel formats into fewer channels by taking a proportion of each input and deriving the outputs by summation of the scaled inputs. This works tolerably well much of the time, and passive media creators and manipulators in cinema and broadcast have worked out rules of thumb to avoid obvious errors - or got used to them. It is far more problematic for interactive media. This practical paper explains in words and diagrams (and, equipment permitting, some surround and stereo samples) how stereo, 5.1 and 7.1 media have traditionally been folded down; some of the problems this has caused in game implementations for PCs and consoles, including the very newest; how the approaches necessarily vary between game genres; how to work around them; and how to avoid the problem entirely - for both pre-rendered and adaptively-created content. The paper contrasts older channel-based and newer object-oriented audio systems, shows how difficult it can be to ensure that a mix for one configuration preserves the same component balance - especially when both 3D and pre-rendered content must be considered - and argues how much that matters as listener configurations proliferate.
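For concreteness, one common 5.1-to-stereo fold-down of the kind the paper critiques looks like this; the coefficients follow ITU-R BS.775, and LFE handling (often simply discarded) varies between implementations.

    import math

    C = 1.0 / math.sqrt(2.0)   # -3 dB, the usual centre/surround weight

    def fold_down_5_1(l, r, c, lfe, ls, rs):
        # Scaled summation: each output is a weighted sum of inputs.
        # Correlated content present in several channels can sum well
        # above full scale or shift in balance, which is where the
        # perils begin.
        left = l + C * c + C * ls
        right = r + C * c + C * rs
        return left, right

    # Identical full-scale content in L, C and Ls sums to about 2.41
    # on the left output, far over full scale, unless the whole mix
    # is scaled back down.
    print(fold_down_5_1(1.0, 0.0, 1.0, 0.0, 1.0, 0.0))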
Changing the Way Stories are Told & Games Will be Played
Chanel Summers, Syndicate 17 & University of Southern California
Over the next few years, augmented reality games - where the game world is overlaid on top of the real world - are going to become more and more prevalent. Sound professionals working on these games face unique challenges, since they must not only create great audio, but audio that blends skillfully with the real world. This session will provide specific audio techniques that can be used to advance story and gameplay in augmented reality games and will illustrate these techniques with real-world examples from "Leviathan: The Evolution of Storytelling", a groundbreaking product featured in Intel's 2014 Consumer Electronics Show keynote. It will also explore how the Leviathan team brought this rich world to life through audio, including lessons learned while creating an audio design for such an ambitious and large-scale project within an extremely tight schedule.

 
