Headphone-Based Virtual Spatialization of Sound with a GPU Accelerator

Authors: UPV
Year
Journal: JOURNAL OF THE AUDIO ENGINEERING SOCIETY

Abstract

Multichannel audio signal processing has undergone major development in recent years. The incorporation of spatial information into an immersive audiovisual virtual environment or into video games provides a better sense of "presence" to applications. In a binaural system, spatial sound consists of reproducing audio signals with spatial cues (spatial information embedded in the sound) through headphones. This spatial information allows the listener to identify the virtual positions of the sources corresponding to different sounds. Headphone-based spatial sound is obtained by filtering different sound sources through a collection of special filters (whose frequency responses are called Head-Related Transfer Functions) prior to rendering them through headphones. These filters belong to a database containing a limited number of fixed spatial positions. A complete audio application that can render multiple sound sources at any position in space and virtualize movements of sound sources in real time demands high computational power. Graphics Processing Units (GPUs) are highly parallel programmable coprocessors that provide massive computation when the required operations are properly parallelized. This paper presents the design of a headphone-based multisource spatial audio application whose main feature is that all required processing is carried out on the GPU. To this end, two solutions are proposed: one to synthesize sound sources at spatial positions that are not included in the database, and one to virtualize movements of sound sources between different spatial positions. The results show that the proposed application is able to move up to 240 sources simultaneously.
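
The processing chain described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's GPU implementation): each mono source is filtered with a left and a right Head-Related Impulse Response (the time-domain counterpart of the HRTF), and a position absent from the database is synthesized here by linear interpolation between the two nearest stored positions, which is one common approach. The function names, the azimuth-only database, and the interpolation scheme are illustrative assumptions.

```python
# Hypothetical sketch of headphone-based binaural rendering.
# hrir_db maps stored azimuths (degrees) to (left, right) impulse responses;
# a real database also covers elevation and many more positions.

def convolve(x, h):
    """Direct-form FIR convolution of signal x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

def interpolate_hrir(hrir_db, azimuth):
    """Linearly interpolate between the two nearest stored HRIR pairs,
    one simple way to synthesize positions absent from the database."""
    angles = sorted(hrir_db)
    lo = max(a for a in angles if a <= azimuth)
    hi = min(a for a in angles if a >= azimuth)
    if lo == hi:
        return hrir_db[lo]
    w = (azimuth - lo) / (hi - lo)
    left = [(1 - w) * a + w * b
            for a, b in zip(hrir_db[lo][0], hrir_db[hi][0])]
    right = [(1 - w) * a + w * b
             for a, b in zip(hrir_db[lo][1], hrir_db[hi][1])]
    return left, right

def render_binaural(source, hrir_db, azimuth):
    """Filter a mono source with interpolated left/right HRIRs,
    producing the two headphone channels."""
    hl, hr = interpolate_hrir(hrir_db, azimuth)
    return convolve(source, hl), convolve(source, hr)
```

On a GPU, as in the paper, the per-source convolutions are independent and can be parallelized across sources and output samples, which is what allows many sources to be rendered simultaneously.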