Currently you need to do a bunch of per-frame work to update `AudioListener`s and `PannerNode` positions (see the examples in https://github.com/immersive-web/webxr/pull/930). Frameworks like threejs paper over this, so many end users never have to deal with it, but doing it in vanilla WebXR/WebAudio code is tricky. Furthermore, there's an additional delay introduced by having to ferry this information from the XR frame update to the WebAudio rendering thread.
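For concreteness, here's a minimal sketch of that per-frame bookkeeping, assuming WebXR type definitions (e.g. `@types/webxr`) are available. Session setup and rendering are elided, and real code would likely use `setTargetAtTime()` rather than writing `.value` directly to avoid zipper noise:

```ts
const audioCtx = new AudioContext();

function trackListener(session: XRSession, refSpace: XRReferenceSpace) {
  const onXRFrame = (_time: DOMHighResTimeStamp, frame: XRFrame) => {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      const { position, orientation } = pose.transform;
      const listener = audioCtx.listener;
      // Position is a straight copy, but it happens on the main thread and
      // only reaches the audio rendering thread some time later.
      listener.positionX.value = position.x;
      listener.positionY.value = position.y;
      listener.positionZ.value = position.z;
      // Orientation needs the pose quaternion converted into the listener's
      // forward/up vectors by rotating (0, 0, -1) and (0, 1, 0).
      const [fx, fy, fz] = rotateByQuaternion([0, 0, -1], orientation);
      const [ux, uy, uz] = rotateByQuaternion([0, 1, 0], orientation);
      listener.forwardX.value = fx;
      listener.forwardY.value = fy;
      listener.forwardZ.value = fz;
      listener.upX.value = ux;
      listener.upY.value = uy;
      listener.upZ.value = uz;
    }
    session.requestAnimationFrame(onXRFrame);
  };
  session.requestAnimationFrame(onXRFrame);
}

// Standard quaternion rotation of a vector: v' = v + w·t + q×t, where t = 2(q×v).
function rotateByQuaternion(
  [x, y, z]: [number, number, number],
  q: DOMPointReadOnly,
): [number, number, number] {
  const tx = 2 * (q.y * z - q.z * y);
  const ty = 2 * (q.z * x - q.x * z);
  const tz = 2 * (q.x * y - q.y * x);
  return [
    x + q.w * tx + (q.y * tz - q.z * ty),
    y + q.w * ty + (q.z * tx - q.x * tz),
    z + q.w * tz + (q.x * ty - q.y * tx),
  ];
}
```

Every moving `PannerNode` needs the same treatment, multiplying this boilerplate per sound source.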
A nice API to have would be a WebAudio integration where you can attach an `XRSpace` to the `AudioListener` and `PannerNode`s, perhaps with a scaling factor, and the audio rendering thread is then allowed to fetch position info directly at whatever cadence it would like. (This also means that, under the hood, the panning code can take velocity and other features into account to better predict the head position at render time.)
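As a strawman, the integration could look something like the sketch below. To be clear, `attachXRSpace` and its options exist in no spec; the names are invented here purely to illustrate the shape of the idea:

```ts
// Hypothetical API: the UA, not the app, owns the plumbing between the XR
// pose source and the audio rendering thread.
interface AudioListener {
  attachXRSpace(space: XRSpace, options?: { scale?: number }): void;
}
interface PannerNode {
  attachXRSpace(space: XRSpace, options?: { scale?: number }): void;
}

async function setupSpatialAudio(session: XRSession, audioCtx: AudioContext) {
  // The listener tracks the viewer space directly; the audio rendering
  // thread can sample (and extrapolate) the pose at its own cadence.
  // A scaling factor could map real-world metres onto scene audio units.
  const viewerSpace = await session.requestReferenceSpace("viewer");
  audioCtx.listener.attachXRSpace(viewerSpace, { scale: 1.0 });

  // A sound source pinned to a tracked controller, with no per-frame updates.
  // (Which base space the relative poses are resolved against is one of the
  // details this sketch glosses over.)
  const panner = new PannerNode(audioCtx, { panningModel: "HRTF" });
  const gripSpace = session.inputSources[0]?.gripSpace;
  if (gripSpace) {
    panner.attachXRSpace(gripSpace);
  }
}
```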