AES San Francisco 2012
Live Sound Track Event LS11
Monday, October 29, 11:00 am — 12:30 pm (Room 120)
Live Sound Seminar: LS11 - Audio DSP in Unreal-Time, Real-Time, and Live Settings
Chair: Robert Bristow-Johnson, audioImagination - Burlington, VT, USA
Panelist:
Kevin Gross, AVA Networks - Boulder, CO, USA
Abstract:
In audio DSP we generally worry about two problem areas: (1) the Algorithm: what we're trying to accomplish with the sound and the mathematics for doing it; and (2) Housekeeping: the "guzzintas" and the "guzzoutas," and other overhead. On the other hand, there is the audio processing (or synthesis) setting, which might be divided into three classes: (1) Non-real-time processing of sound files; (2) Real-time processing of a stream of samples; (3) Live processing of audio. Each class is more restrictive than the one before it. We'll get a handle on defining what is real-time and what is not, and what is live and what is not. What are the essential differences? We'll discuss how the setting affects how the algorithm and the housekeeping might be done. And we'll look into some common techniques and less common tricks that might assist in getting non-real-time algorithms to act in a real-time context, and in getting *parts* of a non-live real-time algorithm to work in a live setting.
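As one concrete illustration of the housekeeping difference (a minimal sketch added to this listing, not material from the panel itself): the same one-pole lowpass "algorithm" run two ways, once over a whole sound file in memory and once in fixed-size blocks with filter state carried across calls, roughly the shape of a real-time audio callback. The function names, block size, and test signal below are illustrative assumptions.

```c
/* Minimal sketch: identical filter math, different housekeeping.
 * Non-real-time: whole signal available, no state to carry.
 * Real-time: fixed-size blocks, state must persist between calls. */
#include <stdio.h>

#define N_TOTAL 16      /* pretend "sound file" length (assumed)           */
#define BLOCK    4      /* pretend real-time callback block size (assumed) */

/* Non-real-time: the entire signal is in memory; one loop, no saved state. */
static void process_offline(const float *in, float *out, int n, float a)
{
    float y = 0.0f;
    for (int i = 0; i < n; i++) {
        y = a * in[i] + (1.0f - a) * y;   /* y[n] = a*x[n] + (1-a)*y[n-1] */
        out[i] = y;
    }
}

/* Real-time: only one block is visible per call, so the filter state is
 * the "housekeeping" the offline version never has to think about. */
typedef struct { float y1; } lp_state;

static void process_block(lp_state *s, const float *in, float *out,
                          int n, float a)
{
    float y = s->y1;
    for (int i = 0; i < n; i++) {
        y = a * in[i] + (1.0f - a) * y;
        out[i] = y;
    }
    s->y1 = y;                            /* carry state to the next block */
}

int main(void)
{
    float x[N_TOTAL], y_off[N_TOTAL], y_rt[N_TOTAL];
    for (int i = 0; i < N_TOTAL; i++)
        x[i] = (i % 8 < 4) ? 1.0f : -1.0f;    /* a toy square wave */

    process_offline(x, y_off, N_TOTAL, 0.25f);

    lp_state s = { 0.0f };
    for (int i = 0; i < N_TOTAL; i += BLOCK)  /* simulate callback invocations */
        process_block(&s, x + i, y_rt + i, BLOCK, 0.25f);

    for (int i = 0; i < N_TOTAL; i++)         /* the two outputs match exactly */
        printf("%2d  offline % .4f   block % .4f\n", i, y_off[i], y_rt[i]);
    return 0;
}
```

Getting the two outputs to match is the easy case; the seminar's harder questions (lookahead, latency, and algorithms that want the whole file at once) are what separate the real-time and live settings.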