Faster algorithm could lead to more realistic sounds in VR

It takes seconds to generate a new model, not hours.

Producing realistic sound models in VR is tricky, even compared to conventional video games. You don't always know how objects will sound in a given environment or where the listener will be, and you don't have the luxury of waiting hours for conventional sound modelling to finish. Thankfully, Stanford researchers have found a way to produce those models in a viable time frame. Their algorithm can calculate 3D sound models in mere seconds -- not real-time, but quickly enough that you could pre-calculate sound models for very specific situations.

The solution ultimately involved ditching the traditional approach to calculating sound propagation, which has its roots in the theories of pioneering scientist Hermann von Helmholtz. Instead, the scientists took their cue from composer Heinrich Klein's ability to blend many piano notes into a single sound. Their algorithm splits an object's vibration modes into individual chords, then runs a time-based soundwave simulation for each chord rather than for each mode. A GPU-boosted system then rapidly computes the acoustic transfer calculations using a single cube map.
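The chord idea can be illustrated with a minimal sketch. This is not the researchers' implementation; the grouping heuristic (`gap`), sample rate, and decay constant are all made-up placeholders, and a sum of decaying sinusoids stands in for the actual time-domain wave simulation. The point is only that grouping well-separated modes into chords lets one simulation pass cover many modes at once:

```python
import numpy as np

def conflate_modes(freqs, gap=200.0):
    """Greedily group mode frequencies (Hz) into 'chords'.

    Hypothetical heuristic: modes in the same chord must be at least
    `gap` Hz apart, so their contributions stay distinguishable and
    can share a single time-domain simulation pass.
    """
    chords = []
    for f in sorted(freqs):
        for chord in chords:
            if f - chord[-1] >= gap:  # chords store sorted frequencies
                chord.append(f)
                break
        else:
            chords.append([f])  # no chord had room; start a new one
    return chords

def synth_chord(freqs, duration=0.5, sr=8000, decay=5.0):
    """One time-domain pass for a whole chord: a sum of exponentially
    decaying sinusoids, standing in for a real wave simulation."""
    t = np.arange(int(duration * sr)) / sr
    return sum(np.exp(-decay * t) * np.sin(2 * np.pi * f * t) for f in freqs)

modes = [220.0, 230.0, 450.0, 460.0, 900.0]
chords = conflate_modes(modes)
print(len(chords))  # 2 chords cover 5 modes: 2 simulation passes, not 5
signal = synth_chord(chords[0])
```

With five modes packed into two chords, the sketch needs two simulation passes instead of five; the speedup grows with the number of modes an object has.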

The result is a system with good-enough sound synthesis that can complete models hundreds to thousands of times faster. It could be a while before this reaches software you can use, but it could easily lead to accurate VR clangs, crashes and thuds that have you looking over your shoulder, wondering whether they're coming from somewhere in your home.