Session E  Sunday, May 13  9:00 - 12:30 hr  Room B
Spatial Perception and Processing
Chair: Durand Begault, NASA Ames Research Center, Moffett Field, CA, USA

9:00 hr  E-1
The paper describes experiments that aim to determine the influence of
visual cues on the perception of spatial sound. An earlier stage of the
experiments showed that a relationship exists between the perception of
video presented on a screen and sound signals reproduced in a surround
system. However, this relationship depends on the type of audio-visual
signals. A series of subjective tests was therefore performed with dozens
of expert listeners in order to characterize these dependencies. The main
issue in such experiments is the analysis of the influence of visual cues
on the perception of surround sound. Conclusions concerning the complexity
of the investigated problem are included.
9:30 hr  E-2
For human listeners, many of the reflections generated inside rooms are
masked by the direct signal and by other reflections. To describe such
masking, a multidimensional function is introduced that determines the
Reflection Masking Threshold (RMT). Based on this function, a perceptual
model is developed that can evaluate the audibility of reflections, as
described in examples derived from simulated rooms.
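As an illustration of the general idea, the following is a minimal sketch
of how such a masking threshold might be consulted for individual
reflections. The shape of rmt_db and all numbers are invented
placeholders, not the model described in the paper.

    # Minimal sketch, assuming a hypothetical RMT function.
    def rmt_db(delay_ms, angle_deg):
        # Placeholder threshold (dB re. direct sound): reflections become
        # easier to hear with increasing delay and angular separation.
        return max(-40.0, -0.2 * delay_ms - 0.05 * angle_deg)

    def reflection_audible(level_db, delay_ms, angle_deg):
        # Audible if the reflection's level exceeds the masking threshold.
        return level_db > rmt_db(delay_ms, angle_deg)

    # (level dB re. direct, delay ms, angle deg) from a simulated room:
    reflections = [(-2.0, 10.0, 20.0), (-20.0, 30.0, 60.0), (-8.0, 50.0, 10.0)]
    audible = [r for r in reflections if reflection_audible(*r)]
    print(len(audible), "of", len(reflections), "reflections audible")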
10:00 hr  E-3
A psychophysically derived control for the perceived range of a virtual
sound source was implemented for the Pioneer Sound Field Controller
(PSFC), a spatial auditory display employing a 15-loudspeaker
hemispherical array. Capable of presenting two independent sound sources
moving within a simulated reverberant environment, the PSFC provides
primitives to manipulate source azimuth and elevation, as well as the
size and liveness of the simulated space. Because accurate control of
virtual source range was confounded by variations in both the liveness
parameter and the overall PSFC channel volume, an empirical approach was
employed to derive a Look-Up Table (LUT) inverting the average range
estimates obtained from a group of human subjects who listened to a set
of virtual sources (short speech samples).
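The LUT-inversion idea can be sketched as follows: measured mean range
estimates for a sweep of a control parameter are inverted by
interpolation, so that a desired perceived range maps back to a control
setting. The calibration numbers below are illustrative, not data from
the paper.

    # Minimal sketch of inverting empirical range estimates.
    import bisect

    # (control_setting, mean_perceived_range_m) pairs, sorted by range;
    # values are hypothetical.
    calibration = [(0.0, 0.5), (0.25, 1.1), (0.5, 2.0), (0.75, 3.2), (1.0, 4.5)]

    def control_for_range(target_m):
        # Invert the calibration by piecewise-linear interpolation.
        ranges = [r for _, r in calibration]
        target = min(max(target_m, ranges[0]), ranges[-1])  # clamp to data
        i = bisect.bisect_left(ranges, target)
        if i == 0:
            return calibration[0][0]
        (c0, r0), (c1, r1) = calibration[i - 1], calibration[i]
        t = (target - r0) / (r1 - r0)
        return c0 + t * (c1 - c0)

    print(control_for_range(2.5))  # control setting for a 2.5 m percept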
10:30 hr  E-4
This paper introduces the activities and technical steps of an
interdisciplinary European project called CARROUSO. The name stands for
"creating, assessing and rendering in real time of high quality
audio-visual environments in MPEG-4 context". The key objective of the
project is to provide a novel technology that enables the transfer of a
sound field, generated in a real or virtual space, to another, usually
remote, space. New modeling, recording, encoding, decoding, and
rendering techniques that support and implement this technology will be
discussed.
11:00 hr  E-5
This paper examines how various aspects of the physical characteristics
of the human head and torso affect directional loudness. Modeled
directional characteristics are presented based upon the head-related
transfer functions (HRTFs) of a number of individuals in conjunction
with the Moore loudness model. Data are presented in the frontal,
horizontal, and median planes. Variations between individuals are
explored, as are the differences between near-field and far-field HRTFs.
The contributions of the pinna, head, and torso are examined separately.
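The combination of HRTFs with a loudness model can be sketched in
simplified form: filter a source spectrum through per-band HRTF gains
for one direction, apply a compressive specific-loudness law per band,
and sum across bands and ears. This toy power law stands in for the far
more detailed Moore model, and all band data below are made up.

    # Simplified sketch of direction-dependent binaural loudness.
    import numpy as np

    def loudness_sones(band_levels_db):
        # Toy specific loudness: compressive power law per band,
        # summed over bands. Not the full Moore model.
        intensities = 10.0 ** (band_levels_db / 10.0)
        return float(np.sum(intensities ** 0.3))

    def directional_loudness(source_db, hrtf_left_db, hrtf_right_db):
        # Binaural loudness for one direction: per-ear loudness summed.
        left = loudness_sones(source_db + hrtf_left_db)
        right = loudness_sones(source_db + hrtf_right_db)
        return left + right

    # Illustrative band data for one direction (invented):
    bands = 24
    source = np.full(bands, 60.0)                        # flat 60 dB spectrum
    hrtf_l = 3.0 * np.sin(np.linspace(0, np.pi, bands))  # toy ipsilateral boost
    hrtf_r = -4.0 * np.ones(bands)                       # toy head shadow
    print(directional_loudness(source, hrtf_l, hrtf_r))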
11:30 hr  E-6
The spatial rendering of sound in Virtual Reality systems can quickly
become a computationally expensive process. The author proposes a
spatial sound rendering system that allows for the graceful degradation
of spatial quality based upon scaling parameters. The parameters are a
combination of physical and perceptual attributes. The Scalable Spatial
Sound Rendering system is divided into three user profiles: Professional,
Prosumer, and Consumer, where each profile is composed of a number of
quality levels. Typical applications for this scalable framework include
mobile VR systems and personal VR systems based upon standard multimedia
PCs. One of the main advantages of this scalable architecture is that
the audio content is created only once and is appropriately scaled for
the end user: write once, read many.
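One way such a profile/level hierarchy might be organized is sketched
below: each profile caps perceptually relevant costs such as the number
of rendered reflections and the HRTF filter length, and a level is
chosen to fit the available compute. The names, numbers, and cost model
are illustrative, not the paper's actual parameter set.

    # Hypothetical sketch of scalable rendering profiles.
    from dataclasses import dataclass

    @dataclass
    class QualityLevel:
        max_sources: int      # concurrent spatialized sources
        max_reflections: int  # early reflections rendered per source
        hrtf_taps: int        # HRTF filter length (computational cost)

    PROFILES = {
        "Professional": [QualityLevel(64, 32, 512), QualityLevel(32, 16, 256)],
        "Prosumer":     [QualityLevel(16, 8, 128),  QualityLevel(8, 4, 128)],
        "Consumer":     [QualityLevel(8, 2, 64),    QualityLevel(4, 0, 32)],
    }

    def select_level(profile, cpu_budget):
        # Degrade gracefully: walk a profile's levels from best to worst
        # until one fits the CPU budget (toy cost model).
        for level in PROFILES[profile]:
            cost = level.max_sources * level.max_reflections * level.hrtf_taps
            if cost * 1e-6 <= cpu_budget:
                return level
        return PROFILES[profile][-1]  # fall back to the lowest level

    print(select_level("Consumer", 0.002))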
12:00 hr  E-7
A new evaluation framework for virtual acoustic environments (VAEs) is
introduced. The framework is based on the comparison of real-head
recordings with physics-based room acoustic modeling and auralization.
The real-head recording procedure and the VAE creation method are
discussed, and new signal processing structures for auralization are
introduced. As a case study, recordings were made in a classroom that
was also modeled and auralized.
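One objective comparison such a framework might include is an
octave-band level difference between a real-head recording and its
auralized counterpart. The metric below is a hedged sketch of that
general idea, not the evaluation procedure actually used in the paper.

    # Hypothetical sketch: mean octave-band level error, real vs. auralized.
    import numpy as np

    def band_levels_db(signal, fs, edges):
        # RMS level per frequency band via FFT magnitude binning.
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        levels = []
        for lo, hi in edges:
            band = spectrum[(freqs >= lo) & (freqs < hi)]
            levels.append(10.0 * np.log10(band.sum() + 1e-12))
        return np.array(levels)

    def auralization_error(recorded, simulated, fs=48000):
        # Mean absolute octave-band level difference.
        edges = [(f / np.sqrt(2), f * np.sqrt(2))
                 for f in (125, 250, 500, 1000, 2000, 4000, 8000)]
        return float(np.mean(np.abs(band_levels_db(recorded, fs, edges)
                                    - band_levels_db(simulated, fs, edges))))

    # Example with synthetic signals standing in for the two recordings:
    rng = np.random.default_rng(0)
    real = rng.standard_normal(48000)
    sim = real + 0.1 * rng.standard_normal(48000)
    print(f"{auralization_error(real, sim):.2f} dB mean band error")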