AlloSphere-Research-Group / allolib

Library for interactive multimedia application development
BSD 3-Clause "New" or "Revised" License

Feature Request: Head-Related Transfer Function binaural spatializer #13

Open kybr opened 5 years ago

kybr commented 5 years ago

The title says it all. We already pull in the Zita convolver somewhere, correct? So the hard part is done! It seems like it should not be too hard to hack out something that fits the spatial audio API. Perhaps the example for HRTF could be an app that runs a search/wizard to find the HRIR data that best fits your ears.

mantaraya36 commented 5 years ago

AlloSystem does have a Convolver that uses zita-convolver, but it has not been ported to allolib. That shouldn't be hard to do. However, the hard part of an HRTF spatializer is supporting dynamic positioning and head tracking, since you need an efficient and clean way to "interpolate" between the HRIRs. This is not trivial, but it has received a lot of attention recently, so perhaps there is something ready-made we could use...


grrrwaaa commented 5 years ago

Oculus, Valve, etc. have been active in putting out SDKs based around HRTFs. I played with some of these while trying to make a [vr~] MSP external last year, and found that those SDKs were definitely more approachable than what was previously available, but still pretty rough around the edges. The Oculus SDK was a headache last year but might be a lot better today; I haven't looked in a while. The Steam Audio one does a good job of cross-fading the HRTFs, but oddly did not take into account the distinct locations of the two ears. A simple hack around it (performing the HRTF and distance filtering separately for each ear) led to quite significant improvements for sound sources close to the listener. I was pretty amazed that they hadn't even thought about that in the API... Anyway, the HRTF results with Steam Audio seemed fine after I worked around this inadequacy.
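The per-ear workaround described above can be sketched roughly as follows (names, the ~0.18 m interaural spacing, and the 1/r gain law are my assumptions, not anyone's actual SDK code): offset each ear from the head center along the head's right vector, then compute distance gain and propagation delay per ear instead of once per head.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float distance(Vec3 a, Vec3 b) {
  float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct EarParams {
  float gain;      // inverse-distance attenuation
  float delaySec;  // propagation delay to this ear
};

// headRight: unit vector out of the listener's right ear.
// earSign: +1 for the right ear, -1 for the left.
EarParams earParams(Vec3 source, Vec3 head, Vec3 headRight, float earSign,
                    float earSpacing = 0.18f, float speedOfSound = 343.0f) {
  Vec3 ear{head.x + earSign * 0.5f * earSpacing * headRight.x,
           head.y + earSign * 0.5f * earSpacing * headRight.y,
           head.z + earSign * 0.5f * earSpacing * headRight.z};
  float d = distance(source, ear);
  // Clamp the distance so gain stays bounded for sources inside the head.
  return {1.0f / std::max(d, 0.1f), d / speedOfSound};
}
```

For a source 20 cm from the head, the two ear distances differ by nearly a factor of three, which is why filtering per ear matters so much at close range.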

The biggest challenge I found was how to smoothly interpolate Doppler effects without introducing warbling. I looked at Kalman filters but couldn't figure out exactly what I needed. Basically, with position updates coming in at anywhere from 30 Hz to 90 Hz (including jitter), one needs to compute a smooth curve of motion at sample rate to drive the Doppler delay lines. "Smooth" here means the second derivative should be very small and free of corners, or warbling and robotic artifacts become quite apparent. One particular insight I had is that the accuracy of distance matters more than the accuracy of position (especially for Doppler): our positional acuity is much worse than our sensitivity to small variations in pitch.

If anyone wants to help me look at this I'd really appreciate it. I posted some progress on the Max forums here: https://cycling74.com/forums/audio-workflow-for-vr-max-worldmaking-package and I have some more detailed work I'd be very happy to share. It's mostly in Gen, so it could very easily export into AlloSystem/allolib.

Graham

Graham Wakefield
Assistant Professor, Department of Computational Arts
Canada Research Chair (Tier II) in Interactive Visualization
Director of the Alice Lab for Computational Worldmaking
York University, Toronto
1 416 400 7421
worldmaking.github.io | www.artificialnature.net
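The control-rate-to-audio-rate smoothing problem described above can be sketched minimally like this (my own illustrative code, not the Gen patches referenced): position updates arriving at 30-90 Hz set a target delay, and a one-pole lowpass ramps the actual delay toward it at audio rate before it drives a fractionally interpolated delay line. A one-pole smoother still has a corner in its second derivative at each update, so a real implementation would likely want a critically damped second-order smoother, but the structure is the same.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

class DopplerDelay {
 public:
  DopplerDelay(float sampleRate, float maxDelaySec)
      : sr_(sampleRate),
        buf_(static_cast<size_t>(sampleRate * maxDelaySec) + 2, 0.0f) {}

  // Called at control rate when a new source distance arrives (meters).
  void setDistance(float meters, float speedOfSound = 343.0f) {
    targetDelay_ = meters / speedOfSound * sr_;  // delay in samples
  }

  // Called once per audio sample.
  float process(float in) {
    // One-pole smoothing toward the target delay (audio-rate ramp).
    smoothedDelay_ += coeff_ * (targetDelay_ - smoothedDelay_);
    buf_[writeIdx_] = in;
    // Read with linear interpolation at the fractional delayed position.
    float readPos = static_cast<float>(writeIdx_) - smoothedDelay_;
    while (readPos < 0.0f) readPos += static_cast<float>(buf_.size());
    size_t i0 = static_cast<size_t>(readPos) % buf_.size();
    size_t i1 = (i0 + 1) % buf_.size();
    float frac = readPos - std::floor(readPos);
    float out = (1.0f - frac) * buf_[i0] + frac * buf_[i1];
    writeIdx_ = (writeIdx_ + 1) % buf_.size();
    return out;
  }

 private:
  float sr_;
  std::vector<float> buf_;
  size_t writeIdx_ = 0;
  float targetDelay_ = 0.0f, smoothedDelay_ = 0.0f;
  float coeff_ = 0.001f;  // smoothing per sample; tune to taste
};
```

The Doppler shift falls out for free: while the smoothed delay is changing, the read head moves faster or slower than the write head, resampling the signal by exactly the ratio the changing propagation delay implies.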

