Acoustic Texture Rendering for Extended Sources in Complex Scenes
- Zechen Zhang,
- Nikunj Raghuvanshi,
- John Snyder,
- Steve Marschner
ACM Transactions on Graphics (SIGGRAPH Asia 2019), Vol. 38(6)
Try this interactive notebook to experiment with and hear acoustic texture in a browser!
Extended stochastic sources, like falling rain or a flowing waterway, provide an immersive ambience in virtual environments. In complex scenes, the rendered sound should vary naturally with listener position, differing not only in overall loudness but also in texture, to capture the indistinct murmur of a faraway brook versus the bright babbling of one up close. Modeling an ambient sound as a collection of random events such as individual raindrop impacts or water bubble oscillations, we can view this variation as a change in the statistical distribution of events heard by the listener: the arrival rate of nearby, louder events relative to more distant or occluded, quieter ones. Reverberation and edge diffraction from scene geometry multiply and mix events far more extensively than in an empty scene and introduce salient spatial variation in texture. We formalize the notion of acoustic texture by introducing the event loudness density (ELD), which relates the rapidity of received events to their loudness. To model spatial variation in texture, the ELD is made a function of listener location in the scene. We show that this ELD field can be extracted from a single wave simulation for each extended source and rendered flexibly using a granular synthesis pipeline, with grains derived procedurally or from recordings. Our system yields believable, real-time changes in acoustic texture as the listener moves, driven by sound propagation in the scene.
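To make the core idea concrete, here is a minimal granular-synthesis sketch, not the paper's implementation: it assumes a discretized ELD is already given as a set of loudness bins with per-bin event arrival rates, draws event times from a Poisson process for each bin, and overlap-adds a loudness-scaled grain at each event. All names (`render_texture`, `eld_levels_db`, `eld_rates_hz`, the toy grain) are illustrative assumptions, and the toy ELDs at the end are made-up numbers, not data from the paper.

```python
import numpy as np

def render_texture(eld_levels_db, eld_rates_hz, grain, duration_s, sr=44100, seed=0):
    """Toy granular synthesis from a discretized event loudness density (ELD).

    eld_levels_db : loudness bin centers (dB), an assumed discretization
    eld_rates_hz  : mean event arrival rate per loudness bin (events/second)
    grain         : mono grain waveform (e.g., one raindrop impact)
    """
    rng = np.random.default_rng(seed)
    n_out = int(duration_s * sr)
    out = np.zeros(n_out + len(grain))  # padded so grains near the end fit
    for level_db, rate in zip(eld_levels_db, eld_rates_hz):
        gain = 10.0 ** (level_db / 20.0)           # dB -> linear amplitude
        n_events = rng.poisson(rate * duration_s)  # Poisson event count for this bin
        starts = rng.integers(0, n_out, size=n_events)
        for s in starts:
            out[s:s + len(grain)] += gain * grain  # overlap-add the grain
    return out[:n_out]

# Hypothetical ELDs: a distant source has many quiet events, a near one
# fewer but louder events, so the texture (not just loudness) changes.
sr = 44100
t = np.arange(0, 0.03, 1 / sr)
grain = np.sin(2 * np.pi * 1200 * t) * np.exp(-t / 0.005)  # toy "drip" grain
far = render_texture([-40, -30, -20], [200, 50, 5], grain, duration_s=2.0, sr=sr)
near = render_texture([-20, -10, 0], [20, 10, 5], grain, duration_s=2.0, sr=sr)
```

In the paper's pipeline the ELD field is extracted from a wave simulation and varies with listener position; this sketch only illustrates the rendering step, where the balance between many quiet events and few loud ones produces the perceived texture.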
Supplementary technical video for the SIGGRAPH Asia 2019 paper.