Location: Zoom virtual meeting (5pm Central European Time / Germany time zone)
Moderated by: Dr. Elena Shabalina
Speaker(s): Dr. Hyunkook Lee, Philipp Strobel
For this month’s AES South German Section Research Colloquium, two experts in the field of audio measurement and psychoacoustic perception will discuss their experiences with commonly used listening-test methods, their evaluation, and the analysis of measurements, with full interaction with the audience. Your questions are welcome!
The Experts
Dr. Hyunkook Lee
Dr. Hyunkook Lee is Associate Professor in Music Technology at the University of Huddersfield, UK. He heads the Applied Psychoacoustics Lab (APL) at Huddersfield, leading research in various areas of 3D audio (perception of height, recording, reproduction, virtual acoustics, 6DoF AR/VR audio, etc.). Before joining Huddersfield in 2010, he was a senior research engineer in audio R&D at LG Electronics, South Korea. He received a PhD degree in sound recording and psychoacoustic engineering and a bachelor's degree in music and sound recording (Tonmeister) from the University of Surrey in 2006 and 2002, respectively. He is a fellow of the AES, vice-chair of the AES High Resolution Audio Technical Committee, and an associate technical editor of the Journal of the AES.
His talk will give an overview of some classic psychometric listening-test methods for detection/discrimination tasks and discuss how an ABX test can be conducted more rigorously, following threshold and signal detection theories, compared with the conventional "self-switching" method. Potential influences of the testing method on bias will also be discussed.
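As an illustrative aside (not material from the talk itself), the statistical backbone of an ABX test can be sketched as a one-sided binomial test: under the null hypothesis the listener is guessing, so the number of correct identifications out of a fixed number of trials is compared against chance at p = 0.5.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` ABX trials by pure guessing (chance = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: a listener identifies X correctly in 14 of 16 trials.
p = abx_p_value(14, 16)
print(f"p = {p:.4f}")  # → p = 0.0021, well below 0.05
```

A small p-value suggests the listener can genuinely discriminate the stimuli; the trial count and significance criterion should of course be fixed before testing, which is part of the rigor the talk addresses.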
Philipp Strobel
After completing his master's degree in communications engineering, Philipp Strobel worked as a development engineer for RF hardware at Rohde&Schwarz GmbH & Co. KG. Since 2004, he applies his experience as a product manager at Rohde&Schwarz. He has been working in the field of digital signal processing for crosstalk cancellation, loudspeaker crossovers and room correction. Another focus of his work is binaural synthesis with headphones to simulate loudspeakers in space.
He will discuss how high-end headphones can show very similar measurement results yet differ in their ability to reproduce detail in recorded sound. What is missing here?
Join Meeting
The colloquium will be held in English with two presentations of 30 minutes each and a 30 minute discussion.
Link: https://us02web.zoom.us/webinar/register/WN_vIj8y1KyRGyY5jUYeVGS0g
The Meeting Format: We will be hosting this meeting using Zoom. After registering, you will receive a confirmation email containing information about joining the webinar. For most participants, audio and video are muted when they join the meeting. Participants can later be unmuted (use the raised-hand function to request this). This will be explained again at the beginning of the meeting. For better quality, we suggest using a headset with a microphone. The presentation won't be recorded. By turning on your camera, you consent to your image being used in a photograph of the event.
Other Business: -
Posted: Sunday, April 4, 2021
Location: Zoom virtual meeting (5pm Central European Time / Germany time zone)
Moderated by: Rafael Kassier
Speaker(s): Przemek Danowski, Made Indrayana (Indra), Martin Rieger, Dr. Katja Rogers, Dr. Ben Supper
For this month’s AES South German Section Research Colloquium, a roundtable of experts in VR Audio will be discussing the latest and greatest developments in the field, with full interaction with the audience!
The Experts
Przemek Danowski - New media / audio / video / VR specialist
Made Indrayana (Indra) - CTO at Double Shot Audio
Martin Rieger - Freelance 3D Audio Technology & Immersive Content Creator
Discussion: How does VR sound drive immersive storytelling?
Dr. Katja Rogers - Postdoctoral Researcher with the HCI Games Group, University of Waterloo
HCIGames Group - Profile Video
Dr. Ben Supper - Supperware Ltd
Presentation at the Audio Developer Conference
Join Meeting
The colloquium will be held in English with a 60 min roundtable discussion.
Link: https://us02web.zoom.us/meeting/register/tZwtf-6prD4tG92PKdbKe-3LXbd-w3INbDtt
The Meeting Format: We will be hosting this meeting using Zoom. After registering, you will receive a confirmation email containing information about joining the webinar. For most participants, audio and video are muted when they join the meeting. Participants can later be unmuted (use the raised-hand function to request this). This will be explained again at the beginning of the meeting. For better quality, we suggest using a headset with a microphone. The presentation won't be recorded. By turning on your camera, you consent to your image being used in a photograph of the event.
iCal Event: Add the iCal event to your calendar client.
Other Business: -
Posted: Thursday, March 11, 2021
Location: Zoom virtual meeting (5pm Central European Time / Germany time zone)
Moderated by: Elena Shabalina - d&b audiotechnik GmbH
Speaker(s): Jonathan D. Ziegler - Institute for Visual Computing, University of Tübingen
Abstract
Human interaction increasingly relies on telecommunication as an addition to or replacement for immediate contact. Remote participation in conferences, sporting events, or concerts is more common than ever, and with current global restrictions on in-person contact, it has become an inevitable part of many people's reality. The work presented here aims to improve these encounters by enhancing the auditory experience. Improving fidelity and intelligibility can increase the perceived quality and enjoyability of such interactions and potentially raise acceptance of modern forms of remote experience. Two approaches to automatic source localization and multichannel signal enhancement are investigated, for applications ranging from small conferences to large arenas.
Three first-order microphones of fixed relative position and orientation are used to create a compact, reactive tracking and beamforming algorithm, capable of producing pristine audio signals in small and mid-sized acoustic environments. With inaudible beam steering and a highly linear frequency response, this system aims at providing an alternative to manually operated shotgun microphones or sets of individual spot microphones, applicable in broadcast, live events, and teleconferencing or for human-computer interaction.
Multiple microphones with unknown spatial distribution are combined to create a large-aperture array using an end-to-end deep learning approach. This method combines state-of-the-art single-channel signal separation networks with adaptive, domain-specific channel alignment. The Neural Beamformer is capable of learning to extract detailed spatial relations of channels with respect to a learned signal type, such as speech, and to apply appropriate corrections in order to align the signals. This creates an adaptive beamformer for microphones spaced up to on the order of 100 m apart.
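For context (this is background, not code from the presented work), the classical delay-and-sum principle that such beamformers build on can be sketched as follows: each channel is delayed so that the target source is time-aligned across microphones, and the aligned channels are then averaged, so the target adds coherently while uncorrelated noise is attenuated.

```python
import numpy as np

def delay_and_sum(signals: np.ndarray, delays_samples: np.ndarray) -> np.ndarray:
    """Classic delay-and-sum beamformer with integer-sample delays.

    signals        : (channels, samples) array of microphone signals
    delays_samples : per-channel arrival delays of the target source
    """
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -int(d))  # advance each channel to undo its delay
    return out / signals.shape[0]

# Toy example: the same pulse arrives 0, 3, and 5 samples late at three mics.
pulse = np.zeros(64)
pulse[10] = 1.0
mics = np.stack([np.roll(pulse, d) for d in (0, 3, 5)])
aligned = delay_and_sum(mics, np.array([0, 3, 5]))
# After alignment the pulse adds coherently: aligned[10] == 1.0
```

The localization half of the problem, estimating those delays in the first place (fixed geometry in the three-microphone system, learned alignment in the Neural Beamformer), is what distinguishes the two approaches in the talk.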
The Presenter
Jonathan D. Ziegler is a PhD student at the Wilhelm Schickard Institute for Visual Computing at the Eberhard Karls University in Tübingen, focusing on deep learning and audio signal processing. He received his degree in Physics from the Karlsruhe Institute of Technology. In the past five years he has completed two large research projects as part of the Institute for Applied Artificial Intelligence at the Stuttgart Media University, closely collaborating with industry leaders in microphone and console design. In 2020, he joined the console manufacturer Lawo as a machine learning engineer, working on model optimization for real-time applications. He has more than fourteen years of experience as a musician and producer, and ran a small recording studio for over ten years.
Join Meeting
The colloquium will be held in English with a 30 min presentation.
Link: https://us02web.zoom.us/webinar/register/WN_RbvV4Pr9TWixDYYafafaVA
The Meeting Format: We will be hosting this meeting using Zoom. After registering, you will receive a confirmation email containing information about joining the webinar. Most participants will have audio and video muted during the meeting. The moderator will unmute participants in turn to ask questions during the Q&A period. This will be explained again at the beginning of the meeting. For better quality, we suggest using a headset with a microphone.
The presentation will be recorded. By unmuting your microphone, you consent to your voice being recorded. By turning on your camera, you consent to your image being recorded, which may also be used in a photograph of the event.
iCal Event: Add the iCal event to your calendar client.
Other Business: There will be a Q&A session after the talk.
Posted: Wednesday, February 3, 2021
Location: Zoom virtual meeting (5pm Central European Time / Germany time zone)
Moderated by: Elena Shabalina - d&b audiotechnik GmbH
Speaker(s): Lukas Benedicic - FH Joanneum Graz and University of Performing Arts Graz
Abstract
This research work deals with the production of audio content optimized for playback via the Amazon Echo Studio smart speaker, as a case study of stereo-upmix-capable playback systems. The topic of 3D audio is considered one of the most promising within the audio community, but its establishment in the mainstream seems a long way off. This 'inertia' may be due to the still rather limited accessibility of 3D audio, as such content can usually only be played back via special systems. The vast majority of people still consume audio content in stereo. In order to offer immersive listening experiences nonetheless, playback systems such as the Amazon Echo Studio, or soundbars such as those from Sennheiser, use stereo upmix technologies, which means that no special audio formats are required. The goal of this work is to find out how to best use these technologies from a production point of view. This will help determine whether such technologies should be considered in future productions.
The Presenter
I am a Sound Designer and Musician, currently finishing my studies at the FH Joanneum Graz and the University of Music and Performing Arts Graz. My academic journey started with Musicology, where I got my bachelor's degree. Between my bachelor's and master's studies I completed an audio engineering course, which certifies me as an audio engineer. Besides the formal education, I try to stay busy improving my skills, whether it be sound design related or musical.
Join Meeting
The colloquium will be held in English with a 45-50 min presentation.
Link: https://us02web.zoom.us/webinar/register/WN_F4VxthiKToeNiWSq1-zXvA
The Meeting Format: We will be hosting this meeting using Zoom. After registering, you will receive a confirmation email containing information about joining the webinar. Most participants will have audio and video muted during the meeting. The moderator will unmute participants in turn to ask questions during the Q&A period. This will be explained again at the beginning of the meeting. For better quality, we suggest using a headset with a microphone.
The presentation will be recorded. By unmuting your microphone, you consent to your voice being recorded. By turning on your camera, you consent to your image being recorded, which may also be used in a photograph of the event.
iCal Event: Add the iCal event to your calendar client.
Other Business: There will be a Q&A session after the talk.
Posted: Saturday, January 23, 2021
If you would like to keep up to date with the latest section news and events, please subscribe to our new mailing list.
Posted: Tuesday, January 19, 2021
Location: Zoom virtual meeting (5pm Central European Time / Germany time zone)
Moderated by: Elena Shabalina - d&b audiotechnik GmbH
Speaker(s): Nadja Schinkel-Bielefeld - Sivantos GmbH
Abstract
Hearing aid functionality changes with the acoustic situation and is steered by complex algorithms. This makes it necessary to evaluate hearing aids not only under specific, well controlled conditions in the laboratory, but also in real life. A method to do so is ecological momentary assessment. This typically involves subjects filling out several questionnaires per day describing their experience with the hearing aids in the current moment and the acoustic environment they are in. In addition, objective data about the acoustic situation can be collected from the hearing aids.
Compared to traditional home-trial studies and retrospective questionnaires, this approach has the advantages of less memory bias and greater context sensitivity, as results can be evaluated for different acoustic situations. However, as subjects may avoid difficult situations or change their behaviour depending on the hearing aids, subjective ratings of the hearing aids alone may not give an accurate picture without considering the experienced environments as well.
In my talk I will describe some example EMA studies we did – including a completely contactless one – and will discuss advantages and challenges of this method.
The Presenter
Nadja Schinkel-Bielefeld studied physics at the Universities of Bremen (Germany) and Durham (UK), graduating from the latter with an M.Sc. in Elementary Particle Theory. She obtained a doctorate from Bremen University for research on probabilistic models of human contour integration, conducted at the Institute of Theoretical Neurophysics. She went on to do two postdocs in the United States. The first was in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, where she worked on models of human perceptual organization and their implications for human-computer interaction. The second was in the NeuroTheory Lab at the University of Maryland, College Park, where she worked on nonlinear models of single-neuron computation in the auditory system. After returning to Germany, she worked at the Fraunhofer Institute for Integrated Circuits, conducting research on listening-test methodology for the subjective evaluation of speech and audio-coded material. In 2017 she joined Sivantos GmbH, where she focuses on big-data analysis and the evaluation of hearing aids in everyday life using ecological momentary assessment.
Join Meeting
The colloquium will be held in English.
Link: https://dbaudio.zoom.us/j/92281799832?pwd=dkEzNStLNXVubmM0ZHRhalZqdVB6QT09
Meeting-ID: 922 8179 9832
Password: 404494
The Meeting Format: We will be hosting this meeting using Zoom. Most participants will have audio and video muted during the meeting. The moderator will unmute participants in turn to ask questions during the Q&A period. This will be explained again at the beginning of the meeting. For better quality, we suggest using a headset with a microphone.
iCal Event: Add the iCal event to your calendar client.
Other Business: There will be a Q&A session after the talk.
Posted: Tuesday, December 22, 2020
Location: Webex virtual meeting
Moderated by: Dr. Rafael Kassier
The final meeting of 2020 - a year of great upheaval, but one in which we made a start on rebuilding the South German Section! This will be the 6th meeting (5th virtual).
Elena Shabalina has been planning the upcoming research colloquium events, and we have FOUR sessions planned for next year, from January to April!
Agenda:
Join the Meeting
This meeting is open to AES Members and AES Non-Members.
Link: Webex-Meeting
Meeting Password: sYMK97uMsG3
The Meeting Format: We will be hosting this meeting using Cisco Webex. For better quality, we suggest using a headset with a microphone.
Posted: Sunday, December 13, 2020
To receive all information about events and meetings of the AES South German Section and to keep updated about the latest news, please check your contact preferences.
Three steps are necessary:
1. You must be an AES member and log in to the AES website. You must also have a valid email address in your basic information.
2. Check if you are a member of the AES South German Section in the AES Member Portal. If you are not a member of the South German Section, please write an email to change your affiliation.
3. Check your communication preferences. We recommend opting in to the lists:
Please ensure that the "Do Not Email" checkbox above is unchecked.
Posted: Sunday, December 13, 2020