
Journal of the Audio Engineering Society

The Journal of the Audio Engineering Society — the official publication of the AES — is the only peer-reviewed journal devoted exclusively to audio technology. Published 10 times each year, it is available to all AES members and subscribers.

 

The Journal contains state-of-the-art technical papers and engineering reports; feature articles covering timely topics; pre- and post-event reports of AES conventions and other society activities; news from AES sections around the world; Standards and Education Committee work; membership news; new products; and newsworthy developments in the field of audio.

 


April 2025 - Volume 73, Number 4

Papers


This paper explores issues of equity, diversity, and inclusion in the Audio Engineering Society. It provides an overview of recent initiatives and publications on the participation of various groups in the audio industry and the Audio Engineering Society community. It also discusses the concept of justice in human research and its relevance to the recruitment of participants in audio studies. The paper analyzes the demographic data of participants in Journal of the Audio Engineering Society publications, using a corpus of 134 papers from 2022 to 2024. The results of the meta-analysis reveal that the age and gender distributions of the participants do not reflect the general population. The paper discusses the implications of these findings and concludes by calling for a discussion of best practices for increasing participant diversity in audio research.
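For readers unfamiliar with this kind of demographic comparison, the sketch below shows one common way such a mismatch can be quantified: a chi-square goodness-of-fit test of observed participant counts against reference population proportions. It is purely illustrative; the counts, proportions, and the choice of test are placeholders and are not taken from the paper.

```python
# Illustrative only: chi-square goodness-of-fit check comparing the gender
# distribution of study participants against general-population proportions.
# All numbers below are invented placeholders, not data from the meta-analysis.
from scipy.stats import chisquare

observed_counts = [412, 88, 10]           # hypothetical participant counts: men, women, other/undisclosed
population_props = [0.49, 0.50, 0.01]     # hypothetical reference proportions for the general population

total = sum(observed_counts)
expected_counts = [p * total for p in population_props]

stat, p_value = chisquare(f_obs=observed_counts, f_exp=expected_counts)
print(f"chi-square = {stat:.1f}, p = {p_value:.2e}")
# A small p-value indicates the participant distribution departs from the reference distribution.
```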

The head mesh is a fundamental component in simulating head-related transfer functions (HRTFs). The techniques used to acquire and preprocess 3D meshes prior to calculation directly influence HRTF results. This study compares meshes obtained through different methods and analyzes the impact of mesh differences on HRTFs. Three mesh capture methods based on different technical principles were employed to obtain meshes of the human head: magnetic resonance imaging (MRI), optical scanning, and LightCage. A comparative analysis revealed that the lateral pinna parameters of the MRI mesh tend to be larger than those from the other methods owing to poorer preservation of ear shape, leading to significant variations in HRTFs. The impact of differences in the ear canal and hair areas of the meshes on HRTFs was also evaluated, revealing that the ear canal had minimal influence on the directional transfer functions and that bulging caused by hair did not affect localization performance. Based on these results, the study analyzes the advantages and limitations of the various methods and their underlying principles. This research serves as a reference for selecting head mesh acquisition methods and mesh preprocessing for HRTF simulations.
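As a rough illustration of how differences between HRTFs are often quantified, the sketch below computes a log-spectral distortion between two head-related impulse responses. The metric, function names, and placeholder signals are illustrative assumptions, not the comparison procedure used in the study.

```python
# Generic comparison of two HRTFs for the same direction via log-spectral
# distortion (LSD). This is a common metric, not necessarily the one used in
# the study; the impulse responses below are random placeholders.
import numpy as np

def log_spectral_distortion(hrir_a, hrir_b, n_fft=512):
    """RMS difference in dB between the magnitude responses of two
    head-related impulse responses (time-domain arrays of equal length)."""
    A = np.abs(np.fft.rfft(hrir_a, n_fft))
    B = np.abs(np.fft.rfft(hrir_b, n_fft))
    eps = 1e-12                                   # avoid log of zero
    diff_db = 20.0 * np.log10((A + eps) / (B + eps))
    return np.sqrt(np.mean(diff_db ** 2))

# Placeholder impulse responses standing in for, e.g., an MRI-derived mesh
# simulation versus an optically scanned mesh simulation.
rng = np.random.default_rng(0)
hrir_mri = rng.standard_normal(256) * np.hanning(256)
hrir_scan = hrir_mri + 0.05 * rng.standard_normal(256)
print(f"LSD: {log_spectral_distortion(hrir_mri, hrir_scan):.2f} dB")
```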

Electric vehicles (EVs), especially trucks, are becoming more common and present a distinct acoustic challenge compared with typical internal combustion engine vehicles: their silent operation poses a safety concern for pedestrians. This study provides a user-centered and psychoacoustically informed design methodology for acoustic vehicle alerting systems (AVAS), especially for electric trucks. Building on the preliminary findings of a semantic differential analysis that compared the perceived sound characteristics of internal combustion engine vehicles and EVs with and without AVAS, the study describes a design procedure that combines qualitative methods, such as open-ended surveys and jury tests, with quantitative, psychoacoustically grounded methods. The emphasis is on balancing the intended “electric” and “truck-like” sound characteristics. The resulting design method for user-centered and psychoacoustically informed AVAS for EVs contributes to the expanding effort to develop effective and user-friendly solutions for improving electric truck AVAS perception in urban contexts.

Engineering Reports


Diffusion-Based Denoising of Historical Recordings

Authors: Miranda, Bernardo V.; Deslandes, Rafael A.; Irigaray, Ignacio; Biscainho, Luiz W. P.

In the context of audio restoration, the need to remove background noise from historical music recordings is a recurring problem, for which traditional signal processing and supervised deep learning methods have been previously applied. In this work, a generative approach that adapts conditional diffusion sampling for removing perceptually distributed noise is investigated, using the particular case of background noise removal from solo classical piano recordings as a proof of concept. The proposed method uses a set of noise examples to simulate perceptually distributed noise with specific characteristics throughout conditional diffusion sampling. Experiments with real historical 78 RPM recordings and clean recordings with added 78 RPM noise and tape hiss demonstrate that diffusion-based audio denoising performs comparably to state-of-the-art deep learning methods.
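As background, the sketch below illustrates the general shape of a conditional diffusion sampling loop for denoising: a reverse-diffusion Euler loop whose clean-signal estimate is nudged toward the observed noisy recording at each step. The denoiser network, noise schedule, and conditioning weight are hypothetical placeholders; this is not the authors' implementation.

```python
# Conceptual sketch of conditional diffusion sampling for audio denoising.
# NOT the paper's method: the denoiser, noise schedule, and data-consistency
# weight below are placeholders for illustration only.
import torch

def conditional_sample(y, denoiser, sigmas, consistency=0.5):
    """Simple Euler reverse-diffusion loop that nudges each clean-signal
    estimate toward the observed noisy recording y (1-D tensor of samples)."""
    x = sigmas[0] * torch.randn_like(y)               # start from noise at the largest sigma
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        x0_hat = denoiser(x, sigma)                   # model's estimate of the clean signal
        x0_hat = x0_hat + consistency * (y - x0_hat)  # crude conditioning on the observation
        d = (x - x0_hat) / sigma                      # score-like update direction
        x = x + (sigma_next - sigma) * d              # Euler step to the next noise level
    return x

# Placeholder usage: a dummy "denoiser" that merely damps its input stands in
# for a trained network.
y = torch.randn(16_000)                               # one second of noisy audio at 16 kHz (placeholder)
sigmas = torch.logspace(0, -2, steps=30)              # decreasing noise levels from 1.0 to 0.01
restored = conditional_sample(y, lambda x, s: 0.9 * x, sigmas)
```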

Standards and Information Documents


AES Standards Committee News

Download: PDF (82.15 KB)

Departments


Conv&Conf

Download: PDF (1.28 MB)

Extras


Table of Contents

Download: PDF (43.02 KB)

AES Officers, Committees, Offices & Journal Staff
