AES Budapest 2012
Poster Session P18
P18 - Education and Human Factors; Applications
Saturday, April 28, 13:00 — 14:30 (Room: Foyer)
P18-1 Optimizing Teaching Room Acoustics: A Comparison of Minor Structural Modifications to Dereverberation Based on Smoothed Responses—Panagiotis Hatziantoniou, University of Patras - Patras, Greece; Nicolas-Alexander Tatlas, Stelios Potirakis, Technological Education Institute of Piraeus - Aigaleo-Athens, Greece
In this work a comparison between traditional acoustic treatment, such as building material substitution, and digital room dereverberation is presented for teaching rooms. Measured responses at a number of listening positions in two rooms are shown, together with the relevant parameters, namely T30, EDT, C50, and STI. Corresponding values are calculated with a simulation model in order to verify its accuracy; minor changes are then introduced to the model with the aim of improving speech intelligibility, and a new set of parameters is obtained. Finally, dereverberation achieved by inverting appropriately modified measured room responses based on Complex Smoothing is applied, and the acoustical parameters are derived from the filtered impulse responses.
Convention Paper 8658
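To make the listed parameters concrete, the following is a minimal sketch, in Python with NumPy, of deriving C50 and T30 from a measured (or dereverberated) room impulse response; the array name ir, the sample rate fs, and the -5 dB to -35 dB fit range for T30 are illustrative assumptions rather than the authors' code.

    import numpy as np

    def c50(ir, fs):
        """Clarity index C50: ratio of early (<50 ms) to late energy, in dB."""
        n50 = int(0.050 * fs)
        early = np.sum(ir[:n50] ** 2)
        late = np.sum(ir[n50:] ** 2)
        return 10.0 * np.log10(early / late)

    def t30(ir, fs):
        """Reverberation time T30 via Schroeder backward integration,
        extrapolated from the -5 dB to -35 dB range of the decay curve."""
        edc = np.cumsum(ir[::-1] ** 2)[::-1]            # energy decay curve
        edc_db = 10.0 * np.log10(edc / edc.max())
        t = np.arange(len(ir)) / fs
        mask = (edc_db <= -5.0) & (edc_db >= -35.0)
        slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second
        return -60.0 / slope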
P18-2 Designing an Audio Engineer's User Interface for Microphone Arrays—Stefan Weigand, Technicolor, Research and Innovation - Hannover, Germany, University of Applied Science (HAW), Hamburg, Germany; Thomas Görne, University of Applied Science (HAW) - Hamburg, Germany; Johann-Markus Batke, Technicolor, Research and Innovation - Hannover, Germany
Microphone arrays are rarely used in artistic recordings, despite the benefits they offer. We attribute this to a lack of user interfaces that consider audio engineers' needs and let them access the arrays' features in a straightforward, well-suited way. This paper contributes to solving this problem by outlining guidelines for such interfaces. A graphical user interface (GUI) for microphone arrays employing Higher Order Ambisonics (HOA), incorporating audio engineers' aims and expectations, has been developed by analyzing their common activities. The presented solution offers three operation modes covering the most frequent tasks in (professional) audio productions, making it more likely that audio engineers will adopt microphone arrays in the future.
Convention Paper 8659
P18-3 User Interface Evaluation for Discrete Sound Placement in Film and TV Post-Production—Braham Hughes, Jonathan Wakefield, University of Huddersfield - Huddersfield, UK
This paper describes initial experiments to evaluate the effectiveness of different 3-D input interfaces, combined with visual feedback, for discrete sound placement in film and TV post-production. The experiments required the user to control the 3-D position of a sound object so as to follow a moving target object within an on-screen video clip, using a range of physical interfaces with and without visual feedback. The inclusion of visual feedback had a statistically significant impact on the accuracy with which the target object was tracked. The Wii remote controller appeared to perform best in the tests and in the user preference ranking, while the traditional desk-based input method performed worst in all tests and in the user preference ranking.
Convention Paper 8660
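As a rough illustration of the kind of analysis implied by the reported significance result, the sketch below (Python with NumPy and SciPy) computes a per-trial RMS tracking error between the controlled sound position and the target trajectory and compares the with- and without-feedback conditions with a paired t-test; the array names and the choice of a paired t-test are assumptions for illustration, not the authors' procedure.

    import numpy as np
    from scipy import stats

    def rms_tracking_error(controlled_xyz, target_xyz):
        """RMS Euclidean distance between controlled and target 3-D trajectories
        (both given as (n_samples, 3) arrays for one trial)."""
        d = np.linalg.norm(controlled_xyz - target_xyz, axis=1)
        return np.sqrt(np.mean(d ** 2))

    def feedback_effect(errors_with, errors_without):
        """Paired t-test over per-participant RMS errors with and without
        visual feedback; p < 0.05 would indicate a significant effect."""
        t, p = stats.ttest_rel(errors_with, errors_without)
        return t, p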
P18-4 KnuckleTap—Exploring the Possibilities of Audio Input in a Mobile Rhythmic Notepad Application—Julian Rubisch, breakingwav.es - Vienna, Austria; Michael Jaksche, University of Applied Sciences - St. Poelten, Austria
Apart from some significant contributions in the scientific community and a few notable product innovations, audio input as a control parameter for musical interaction on mobile devices has so far been widely neglected. This situation is at odds with the fact that musicians often vocalize and develop musical ideas using their voice or other sounding objects, and it fails to take advantage of a contemporary mobile device's most robust and reliable sensor: the microphone. As an example case exploiting these possibilities, we conceived a notepad application that captures rhythmic ideas by recording taps on a surface with a smartphone's built-in microphone, refined by subsequent onset detection, clustering, instantaneous rearrangement of the detected events, and export capabilities.
Convention Paper 8661
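The following is a minimal sketch, in Python with NumPy, of an energy-based tap/onset detector of the kind such an application could apply to microphone input; the frame size, threshold, and hold-off time are illustrative assumptions, not KnuckleTap's actual implementation.

    import numpy as np

    def detect_taps(x, fs, frame=256, threshold=6.0, holdoff_s=0.05):
        """Return onset times (seconds) where the frame energy jumps above
        `threshold` times the median frame energy, with a hold-off period
        to avoid double-triggering on a single tap."""
        n_frames = len(x) // frame
        energy = np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2)
                           for i in range(n_frames)])
        med = np.median(energy) + 1e-12
        onsets, last = [], -np.inf
        for i, e in enumerate(energy):
            t = i * frame / fs
            if e > threshold * med and t - last > holdoff_s:
                onsets.append(t)
                last = t
        return np.array(onsets)

The inter-onset intervals of the detected taps could then be clustered and quantized onto a rhythmic grid before rearrangement and export.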
P18-5 An Optical System to Track Azimuth Head Rotations for Use in Binaural Listening Tests of Automotive Audio Systems—Anthony Price, Bang & Olufsen a/s - Struer, Denmark, presently at University of Surrey, Guildford, Surrey, UK
Binaural technology is used to capture elements of an automotive audio system and reproduce them over headphones. This requires tracking the azimuth head rotations of listening-test participants in order to assist source localization. The parameters and shortcomings of the currently implemented system are discussed, and a new method of tracking azimuth head rotations is described. The new system is implemented, tested, and found to have an error within 0.26 degrees. The potential for its further development, and for the development of the field, is discussed.
Convention Paper 8662
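As a simple illustration of azimuth estimation from optical tracking, the sketch below (Python with NumPy) derives the head azimuth from the horizontal positions of two tracked head markers; the marker layout and coordinate convention are assumptions for illustration and do not describe the system in the paper.

    import numpy as np

    def azimuth_deg(left_marker_xy, right_marker_xy):
        """Head azimuth in degrees from the horizontal-plane positions of two
        head markers (e.g., on either side of the headphone band); 0 degrees
        when the interaural axis is aligned with the reference y-axis."""
        dx, dy = np.asarray(right_marker_xy) - np.asarray(left_marker_xy)
        return np.degrees(np.arctan2(dx, dy))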
P18-6 Investigation of Salient Audio-Features for Pattern-Based Semantic Content Analysis of Radio Productions—Rigas Kotsakis, George Kalliris, Charalampos Dimoulas, Aristotle University of Thessaloniki - Thessaloniki, Greece
The paper focuses on the investigation of salient audio features for pattern-based semantic analysis of radio programs. Most “news and music” radio programs have many structural similarities with respect to the appearance of different content types: speech and music are continuously interchanged and overlapped, while the recognition of specific speakers and voice patterns is of particular importance. Recent research has shown that various taxonomies and hierarchical classification schemes can be effectively deployed in combination with supervised and unsupervised training for semantic audio content analysis. Undoubtedly, audio feature extraction and selection are very important for the success of the finally trained expert system. The current paper employs feature ranking algorithms to investigate audio feature saliency in various classification taxonomies of radio production content.
Convention Paper 8663
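A minimal sketch of feature ranking over extracted audio features, in the spirit of the saliency investigation described above, is given below in Python; the use of scikit-learn's ANOVA F-score (f_classif) is an illustrative choice of ranking criterion, not necessarily one of the algorithms used by the authors.

    import numpy as np
    from sklearn.feature_selection import f_classif

    def rank_features(X, y, feature_names):
        """Rank audio features by class-discriminative power.
        X: (n_frames, n_features) feature matrix,
        y: content-class label per frame (e.g., speech/music/overlap).
        Returns (name, F-score) pairs sorted from most to least salient."""
        f_scores, _ = f_classif(X, y)
        order = np.argsort(f_scores)[::-1]
        return [(feature_names[i], f_scores[i]) for i in order]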
P18-7 Listeners Who Have Low Hearing Thresholds Do Not Perform Better in Difficult Listening Tasks—Piotr Kleczkowski, Marek Pluta, Paulina Macura, Elzbieta Paczkowska, AGH University of Science and Technology - Krakow, Poland
A relationship between measures of hearing acuity and performance in listening tasks for normally hearing subjects has not been established by solid evidence. In this work six one-parameter measures of hearing acuity, based on audiograms, were used to investigate whether a relationship between those measures and listeners' performance existed. The quantifiable results of several listening tests, using speech and non-speech stimuli, were analyzed. The results showed no correlation between hearing acuity and performance, thus demonstrating that hearing acuity should not be a critical factor in the choice of listeners.
Convention Paper 8641
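To illustrate the kind of analysis such a study involves, the sketch below (Python with NumPy and SciPy) correlates a one-parameter, audiogram-based acuity measure with listening-test scores; the pure-tone-average frequencies and the use of Pearson's r are illustrative assumptions, not the paper's six measures.

    import numpy as np
    from scipy import stats

    def pure_tone_average(audiogram, freqs=(500, 1000, 2000, 4000)):
        """Mean hearing threshold (dB HL) over selected audiometric frequencies;
        `audiogram` maps frequency in Hz to threshold in dB HL."""
        return np.mean([audiogram[f] for f in freqs])

    def acuity_vs_performance(acuity_measures, test_scores):
        """Pearson correlation between per-listener acuity measures and
        listening-test scores; a non-significant r would mirror the
        paper's finding of no relationship."""
        r, p = stats.pearsonr(acuity_measures, test_scores)
        return r, p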