Professor, Aalto University, Espoo, Uusimaa, Finland
Abstract: A method for rendering virtual sources is proposed in which the sources are perceived to be closer than the loudspeaker-array radius. The development is based on an informal finding that virtual sources rendered coherently over multiple loudspeakers are perceived at a closer range. To avoid the sound-quality issues inherent in coherent rendering, the input is split into two streams, one containing more transients and the other having a smoother temporal envelope. The transient stream is rendered coherently, while the continuous stream is processed with a time-frequency-domain spreading technique. Results from localization tests with moving sources show that the proposed method produces the perception of closer distances in both sweet-spot and off-sweet-spot listening.
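The transient/continuous stream split described above resembles a transient/steady-state decomposition. As a minimal illustrative sketch (not the paper's actual method), one common approach is median filtering of the STFT magnitude: medians taken across time emphasize steady components, medians across frequency emphasize transients, and the two estimates yield complementary soft masks. All function names and parameters below are hypothetical choices for illustration:

```python
import numpy as np

def stft_mag(x, n_fft=512, hop=128):
    """Magnitude STFT, shape (frames, bins)."""
    win = np.hanning(n_fft)
    frames = [win * x[i:i + n_fft]
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def moving_median(a, size, axis):
    """Running median of odd length `size` along `axis`, edge-padded."""
    pad = size // 2
    widths = [(pad, pad) if ax == axis else (0, 0) for ax in range(a.ndim)]
    padded = np.pad(a, widths, mode="edge")
    out = np.empty_like(a)
    for i in range(a.shape[axis]):
        sl = [slice(None)] * a.ndim
        sl[axis] = slice(i, i + size)
        osl = [slice(None)] * a.ndim
        osl[axis] = i
        out[tuple(osl)] = np.median(padded[tuple(sl)], axis=axis)
    return out

def split_streams(x, n_fft=512, hop=128, kernel=17):
    """Return (transient_mask, continuous_mask) over the STFT grid.

    Median over time smooths out clicks -> steady-state estimate;
    median over frequency smooths out tones -> transient estimate.
    """
    S = stft_mag(x, n_fft, hop)
    steady = moving_median(S, kernel, axis=0)     # filter along time
    transient = moving_median(S, kernel, axis=1)  # filter along frequency
    mask_t = transient / (transient + steady + 1e-12)
    return mask_t, 1.0 - mask_t
```

Applying `mask_t` to the complex STFT before inverse transformation would give the transient stream for coherent rendering, and the complementary mask the continuous stream for spreading; in practice the paper's own decomposition should be followed instead of this sketch.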
Winner of the 149th AES Convention Best Paper Award