Programme
Friday 13th February
Posters
P1-2: An Efficient Implementation of 3-D Audio Engine for Mobile Devices
Frederic Amadu, Jean Michel Raczinski, Arkamys, Paris, France
This paper presents a generic, customizable 3-D audio engine designed specifically for gaming on low-end mobile devices. The engine is based on 3-D positioning of sources and the listener for headphone playback. Distance attenuation, the Doppler effect, and reverberation can be added to meet the JSR-234 specification. To address platform diversity, we have developed a PC application for easily designing the best 3-D audio engine for a given processor's capabilities. Standard HRTF-based processing has been simplified to a limited number of fixed-point IIR filters, which have been successfully implemented on several platforms. Finally, objective and subjective validation methods allow us to certify the quality of each port.
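The distance attenuation and Doppler effects mentioned in the abstract can be sketched with textbook formulas; the rolloff parameters below are illustrative stand-ins, not the engine's (or JSR-234's) actual values:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_gain(distance, ref_distance=1.0, rolloff=1.0, min_distance=0.1):
    """Inverse-distance attenuation in the spirit of JSR-234-style
    rolloff: unity gain at the reference distance, falling off with
    range. Parameter values are illustrative."""
    d = max(distance, min_distance)
    return ref_distance / (ref_distance + rolloff * (d - ref_distance))

def doppler_factor(v_source, v_listener):
    """Pitch-shift factor from source/listener velocities along the
    source-listener axis (positive = approaching). Factor > 1 means
    the perceived frequency rises."""
    return (SPEED_OF_SOUND + v_listener) / (SPEED_OF_SOUND - v_source)
```

A fixed-point engine like the one described would replace these floating-point expressions with table lookups or integer arithmetic, but the relationships are the same.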
P1-3: Efficient Spatial Sound Synthesis for Virtual Worlds
Ville Pulkki, Mikko-Ville Laitinen, Cumhur Erkut, Helsinki University of Technology, Espoo, Finland
Directional audio coding (DirAC) is a frequency-band processing method for spatial audio based on psychophysical assumptions and on energetic analysis of the sound field. This paper presents applications of DirAC in spatial sound synthesis for virtual worlds. The techniques are independent of the reproduction method, which can be any loudspeaker setup or headphones. It is shown that DirAC can be used to position virtual sound sources, to control their spatial extent, and to generate reverberation efficiently in virtual worlds.
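As a rough illustration of the energetic analysis underlying DirAC, a per-block direction and diffuseness estimate from first-order B-format signals (2-D case) might look as follows. Scaling conventions vary between implementations; this is a sketch of the general idea, not the authors' code:

```python
import numpy as np

def dirac_analysis(W, X, Y):
    """Estimate arrival azimuth (degrees) and diffuseness (0..1) for one
    analysis block of horizontal B-format signals. A plane wave yields
    diffuseness near 0; a fully diffuse field yields values near 1."""
    Ix = np.mean(W * X)  # active intensity, x component (up to a scale factor)
    Iy = np.mean(W * Y)  # active intensity, y component
    azimuth = np.degrees(np.arctan2(Iy, Ix))
    energy = np.mean(W**2 + 0.5 * (X**2 + Y**2))
    diffuseness = 1.0 - np.sqrt(2.0) * np.hypot(Ix, Iy) / max(energy, 1e-12)
    return float(azimuth), float(np.clip(diffuseness, 0.0, 1.0))
```

In a full DirAC chain this analysis is run per frequency band, and synthesis then renders the direct part with panning and the diffuse part with decorrelated playback over the chosen reproduction setup.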
P1-4: Augmented Reality Audio for Location-Based Games
Mikko Peltola, Tapio Lokki, Lauri Savioja, Helsinki University of Technology, Espoo, Finland
Location-based games, such as pervasive games and geocaching, could benefit from the use of audio, in particular spatial sound. In this paper we present a binaural recording and rendering system capable of both embedding location and orientation information in audio files and playing audio content tied to the listener's location. Altogether, we introduce a platform for building augmented reality audio applications suitable for outdoor use. The Audiomemo application is presented to highlight the possibilities of location-based spatial sound for games.
P1-5: How Players Listen
Simon Goodwin, Codemasters Software Company, UK
The games industry currently lacks a detailed understanding of the audio configurations its players use. Codemasters surveyed players to investigate the listening systems and configurations they had available, and those they preferred. The results of this survey have implications for how audio assets are prepared, rendered, and mixed in games. Future consumer research is proposed in light of new platforms and audio interfaces.
P1-6: Acoustic DDR: An Automated Test Tool for 3-D Sound Perception Evaluation with Visually Impaired Users
Hector Szabo, Philippe Mabilleau, Bessam Abdulrazak, Université de Sherbrooke, Sherbrooke, Quebec, Canada
Common off-the-shelf (COTS) 5.1 audio systems for PCs are now affordably priced, making it attractive to produce inclusive audio games and simulators that bring together blind and sighted users. Using contextual 3-D sound beacons as a navigational aid enables user orientation and more sophisticated environments. However, predicting the usability and/or playability of contextual 3-D sound as a navigational aid under COTS equipment constraints can be difficult. We present our experimental setup, using a COTS 5.1 audio system and a standard PC, and early findings from a 3-D acoustic "Dance Dance Revolution" game that records users' response time, precision, and torso/head position when presented with 3-D acoustic stimuli. User performance indicates that higher-frequency broadband sounds improve users' aiming toward a sound's virtual azimuth, and exposes the need for tactile guides for orientation.
P1-7: A Precise Sound Image Panning Method for Side Areas Using 5.1 Channel Audio Systems
Keita Tanno, Akira Saji, Shinya Ito, Jie Huang, University of Aizu, Aizu-Wakamatsu City, Fukushima, Japan; Wataru Hatano, Tamura Corp., Tokyo, Japan
5.1-channel home theater systems are widely used both as home audio systems and as high-realism game audio systems. We conducted two experiments to improve the precision and clarity of sound images created and reproduced in the side areas. The experimental setup varied the intensity ratio between the two left loudspeakers, L and SL, and asked listeners about the direction and clarity of the sound images. From the results, we found that the traditional amplitude panning method in the side areas is nonlinear and asymmetric, and that most of the image motion occurs in the middle range of intensity ratios. Based on the localization curve obtained in this experiment, we can compensate for the nonlinearity and asymmetry of sound panning. We also added the frequency characteristics of the HRTF to the sound signals assigned to the L and SL speakers by the amplitude panning method. These changes in frequency characteristics can increase the realism of the sound signals at the near ear and improve the precision and clarity of sound images in the side areas.
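The traditional amplitude panning baseline that this paper measures against can be sketched as the standard sine/cosine constant-power law applied between an adjacent speaker pair such as L and SL. This is the textbook curve whose side-area nonlinearity the authors compensate, not their corrected method:

```python
import math

def constant_power_pan(position):
    """Gains for a pair of adjacent loudspeakers (e.g., L and SL) using
    the sine/cosine constant-power law. `position` runs from 0.0 (fully
    at the first speaker) to 1.0 (fully at the second); the squared
    gains always sum to 1, keeping perceived power constant."""
    theta = position * math.pi / 2.0
    return math.cos(theta), math.sin(theta)
```

The paper's finding is that, for the wide L-SL angle, perceived image position does not track `position` linearly; a compensated curve derived from the measured localization data would remap `position` before applying a law like this one.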
P2-1: Genre-Specific Methodologies for Gameplay-Influenced Soundtracks
Jen Grier, New York University, New York, NY, USA
Using interactive audio programming technologies such as JSyn and JMSL, new algorithmic systems were developed for creating interactive soundtracks tailored to specific video game genres and actions, honing the relationship between interactive audio in game soundtracks and the player's gameplay experience. By using specific generative methods, a unique, sonically interactive experience is possible for each player within the same game.
P3-1: Performance Analysis and Scoring of the Singing Voice
Oscar Mayor, Jordi Bonada, Pompeu Fabra University, Barcelona, Spain; Alex Loscos, Barcelona Music & Audio Technologies, Barcelona, Spain
In this paper we describe the approach we follow to analyze a singer's performance of a reference song. The idea is to rate the performance in the same way a music tutor would: not only giving a score but also giving feedback on how the user performed with respect to expression, tuning, and tempo/timing. Segmentation at the intra-note level is done using an algorithm based on untrained HMMs, with probabilistic models built from a set of heuristic rules that determine regions and their probability of being expressive features. A real-time karaoke-like system is presented in which a user can sing and simultaneously visualize the results of the performance.
P3-2: Developments in Phya and Vfoley, Physically Motivated Audio for Virtual Environments
Dylan Menzies, De Montfort University, Leicester, UK
Phya is an open-source C++ library for incorporating physically motivated audio into virtual environments. A review of the library and its recent developments is presented, including the launch of a project to use Phya as the basis for a fully fledged virtual sound design environment, Vfoley. Vfoley will enable sound designers to rapidly produce rich Foley content from within a virtual environment and to extend entities for use by Phya-enabled applications.
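As a minimal illustration of physically motivated audio of the kind Phya provides, an impact can be rendered as a sum of exponentially decaying sinusoids (basic modal synthesis). The mode values here are invented for the example; Phya's contact and resonator models are considerably richer:

```python
import math

def modal_impact(modes, sample_rate=44100, duration=0.5):
    """Render an impact as a sum of exponentially decaying sinusoids.
    `modes` is a list of (freq_hz, decay_per_second, amplitude) tuples;
    each mode rings down independently, as a struck object's resonances
    do. Values are illustrative, not taken from Phya."""
    n = int(sample_rate * duration)
    out = [0.0] * n
    for freq, decay, amp in modes:
        for i in range(n):
            t = i / sample_rate
            out[i] += amp * math.exp(-decay * t) * math.sin(2 * math.pi * freq * t)
    return out
```

In a physics-engine integration, collision events (impact velocity, contact material) would drive the excitation strength and the choice of mode set, which is the "physically motivated" coupling such libraries provide.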