GodotVR / godot_oculus_mobile

Godot Oculus mobile drivers (Oculus Go / Oculus Quest)
MIT License

Integrate oculus audio spatializer #110

Open ellenhp opened 4 years ago

ellenhp commented 4 years ago

Hey, not sure if this has been discussed already, but I'm curious if anyone has attempted to integrate the Oculus audio spatializer into this plugin. I played around with an AudioStreamPlayer3D and it seems like Godot's default audio system is nowhere near good enough for compelling VR. I don't mind taking a stab at integrating the library, but I wanted to drop an issue here to track it and in case anybody has advice/recommendations.

Wavesonics commented 4 years ago

@ellenhp that would be super cool, would this be for things like HRTF?

ellenhp commented 4 years ago

Yeah, the main benefit is getting their HRTF pipeline. That said, looking through the docs shows they do also have the ability to take 3d meshes and dynamically estimate reverberation for a given environment. I'm not sure how simplified these meshes have to be, and whether or not that's something that's feasible to integrate.

The basic idea of what I'm thinking is subclassing AudioStreamPlayer3D (call it ARVRAudioStreamPlayer3D or something) for the audio sources and rerouting the audio from these players to the Oculus API if available, otherwise going through the default audio system. If I end up figuring out how to pass meshes to the audio system, I'll likely do so by letting developers add an ARVRAudioMesh node as a child of their big static meshes like buildings and terrain. The node would handle extracting the mesh from the parent and passing it down to native code, where it can be sent to the Oculus audio driver. That's the first design that popped into my head, so I'm very open to feedback.
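For illustration only, here is a rough sketch of what that node pair could look like as a Godot 3.x C++ module. The class names, the geometry hand-off, and the spatializer hook are hypothetical; this shows the shape of the idea, not an existing API.

```cpp
// Hypothetical sketch: an AudioStreamPlayer3D subclass plus a helper node that
// forwards its parent's mesh to a native spatializer. Not an existing API.
#include "scene/3d/audio_stream_player_3d.h"
#include "scene/3d/mesh_instance.h"
#include "scene/3d/spatial.h"

class ARVRAudioStreamPlayer3D : public AudioStreamPlayer3D {
	GDCLASS(ARVRAudioStreamPlayer3D, AudioStreamPlayer3D);

protected:
	static void _bind_methods() {}

	// Open design question: where to intercept the mixed frames so they can be
	// handed to a spatializer backend instead of the stock panning path, while
	// still falling back to AudioStreamPlayer3D when no backend is available.
};

class ARVRAudioMesh : public Spatial {
	GDCLASS(ARVRAudioMesh, Spatial);

protected:
	void _notification(int p_what) {
		if (p_what == NOTIFICATION_ENTER_TREE) {
			// Pull the mesh off the parent and hand it down to native code.
			MeshInstance *mi = Object::cast_to<MeshInstance>(get_parent());
			if (mi && mi->get_mesh().is_valid()) {
				// spatializer_submit_geometry(mi->get_mesh()); // hypothetical hook
			}
		}
	}
	static void _bind_methods() {}
};
```

The mesh hand-off is the easy half; the interesting design work is the audio-frame interception point mentioned in the comments above.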

I do want to emphasize though that the primary thing I'm hoping to accomplish is getting the HRTF stuff working. I think that's the bare minimum. I can imagine reverb being a huge pain, so I don't want to over-promise.

m4gr3d commented 4 years ago

That sounds like a good approach!

I would involve @BastiaanOlij in the design for the ARVRAudioStreamPlayer3D class as we'd probably want an API flexible enough to support other VR (and maybe AR) backends.

BastiaanOlij commented 4 years ago

I can't say I've dived into the audio side of Godot much at all. Eric (Oculus) did mention before he'd be interested in seeing us support the audio side of the Oculus SDK.

From what little I played around with it in the gun tutorial I did: a sound that is played can be location based, so it adjusts volume per ear and optionally pitch for a moving sound source. That audio stream is then fed into Godot's audio bus system, where effects are added and sounds from different sources are mixed to create the final output to the speakers.

I'm not sure how that would fit in with the approach Oculus takes, and how the two could be combined or make use of each other.

ellenhp commented 4 years ago

After discussion on the Discord, I elected to try to integrate Resonance Audio into Godot instead of linking with a platform-specific audio library. There's a proof of concept of this integration here: https://github.com/ellenhp/godot/tree/hrtf

I'm rewriting it and working on a PR to add Resonance Audio to Godot, knowing that it's not super likely to be accepted as-is. Hopefully it'll start the discussion though.

knochenhans commented 2 years ago

Hi, I'm really interested in using HRTF on the Oculus Quest in Godot, as I consider it essential for VR. Did anything ever come of this?

ellenhp commented 2 years ago

I did manage to integrate Resonance Audio into Godot, but it broke with a recent change for 4.0. @fire has been working on fixing it, I think. He also maintained a 3.x branch of the audio spatialization code for a while.

fire commented 2 years ago

https://github.com/V-Sekai/godot/tree/feature/spatial-audio is my branch with a demo project at https://github.com/V-Sekai/godot-spatial-audio-project

m4gr3d commented 2 years ago

@fire Do you plan to open a PR to integrate your spatial audio changes?

fire commented 2 years ago

I am currently spread thin, so I wasn't expecting to open a pull request to Godot Engine 4.0.

@ellenhp and I didn't have a good design for the Resonance Audio technology, so the API is jankily integrated.

Assistance welcome and appreciated.

We sketched out a design where we extract the ambisonic coding from the Resonance library, but I'm more focused on game-related things.
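For context (standard ambisonics math, not the Resonance code itself): a first-order encode of a mono source is just a few direction-dependent gains per sample, which is roughly the piece that would be extracted. A minimal sketch in the AmbiX (ACN/SN3D) convention; second order adds five more components:

```cpp
// First-order ambisonic encode of one mono sample toward (azimuth, elevation).
// AmbiX convention: ACN channel order {W, Y, Z, X}, SN3D normalization.
#include <cmath>

inline void ambi1_encode(float s, float azimuth, float elevation, float out[4]) {
	const float ca = std::cos(azimuth), sa = std::sin(azimuth);
	const float ce = std::cos(elevation), se = std::sin(elevation);
	out[0] = s;           // W: omnidirectional
	out[1] = s * sa * ce; // Y
	out[2] = s * se;      // Z
	out[3] = s * ca * ce; // X
}
```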

There was also a reverb audio prototype based on a variation of a nav mesh, but for open spaces. There is a prototype YouTube video, but I think the last gap was generating the "sound graph" nodes for transmission of the reverb audio automatically, rather than hand-coding the graph.

ellenhp commented 2 years ago

Yeah, there are a few things we could do. Reduz made a proposal for one way we could add spatial audio, if I remember right. But doing it in a way that doesn't disrupt the rest of the engine is tricky and would take a lot of work. Depending on how much of a hurry you're in to get something working, I'd consider forcing Godot into 7.1 mode and applying an HRIR to each channel during down-mixing to stereo. You could also increase the maximum number of channels slightly (from 8 to 9, rounding up to five l/r pairs) and then use those 9 channels to encode a second-order ambisonic representation of the sound from each 3D source. Then mixing would happen as normal and you could inject some ambisonic decoding code into the final mixing step.
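As a very rough illustration of the first idea (one HRIR pair per virtual speaker, applied while down-mixing the surround channels to stereo), here is a naive, self-contained sketch. The channel layout, HRIR source, and buffer shapes are all assumptions, and a real implementation would use partitioned FFT convolution rather than this time-domain loop:

```cpp
#include <cstddef>
#include <vector>

struct StereoFrame { float l = 0.0f, r = 0.0f; };

// One head-related impulse response pair (left ear / right ear) per virtual speaker.
struct Hrir {
	std::vector<float> left;
	std::vector<float> right;
};

// Naive time-domain convolution of N surround channels into binaural stereo.
// channels[c] holds the samples of virtual speaker c; hrirs[c] is its HRIR pair.
void downmix_binaural(const std::vector<std::vector<float>> &channels,
		const std::vector<Hrir> &hrirs, std::vector<StereoFrame> &out) {
	const size_t n = channels.empty() ? 0 : channels[0].size();
	out.assign(n, StereoFrame());
	for (size_t c = 0; c < channels.size() && c < hrirs.size(); ++c) {
		const Hrir &h = hrirs[c];
		for (size_t i = 0; i < n; ++i) {
			for (size_t k = 0; k < h.left.size() && k <= i; ++k)
				out[i].l += channels[c][i - k] * h.left[k];
			for (size_t k = 0; k < h.right.size() && k <= i; ++k)
				out[i].r += channels[c][i - k] * h.right[k];
		}
	}
}
```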

Come to think of it, I wonder how hard it would be to replace the current channels system with something a bit more generic. An array of 4 AudioFrames and a hard cap at 8 channels is a bit limiting, and the mapping from those 4 AudioFrames to a 7.1 system is so artificial that I wouldn't mind if someone improved it (3 of the AudioFrames correspond to symmetric speaker pairs, but for one of them the left goes to the subwoofer and the right goes to the center, or vice versa). I personally think there's a lot of room for a cleanup of the channels system, and if it were cleaned up in a forward-thinking way, it's entirely possible that the channel systems could be pluggable. If SPCAP became a module, spatial audio could also be a module, which would make the maintenance burden much smaller for people who need to do spatial audio.
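To make "pluggable" a bit more concrete, a minimal sketch of what such an interface could look like (all names are hypothetical, not an actual Godot API):

```cpp
// Hypothetical speaker-layout interface that could replace the fixed
// "4 AudioFrames, 8 channels max" mapping. Names are illustrative only.
struct Direction3 { float x, y, z; }; // unit direction in listener space

class SpeakerLayout {
public:
	virtual ~SpeakerLayout() {}

	// Number of output channels this layout produces (2, 4, 6, 8, ...).
	virtual int get_channel_count() const = 0;

	// Fill `gains` (get_channel_count() floats) with per-channel gains for a
	// source arriving from `dir`. SPCAP, binaural/HRTF, and ambisonic layouts
	// would each implement this differently behind the same interface.
	virtual void compute_gains(const Direction3 &dir, float *gains) const = 0;
};
```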

I also really believe this is important because, plainly, SPCAP performs very poorly for stereo audio with default parameters. It's passable if you have free-standing stereo speakers, but for headphones it really is unacceptably bad, IMO. If we modularize the channels system, we could keep SPCAP as a loudness-preserving panning system for 3.1, 5.1, and 7.1, and introduce some other, better panning algorithm for stereo, which is what 90% of our users will be doing anyway.
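For reference, the kind of stereo-specific algorithm a dedicated stereo layout could use is tiny; a plain equal-power panner, purely illustrative, looks like this:

```cpp
#include <cmath>

// Equal-power stereo pan: azimuth -1.0 (hard left) .. +1.0 (hard right).
// Keeps gain_l^2 + gain_r^2 == 1 so perceived loudness stays constant.
inline void equal_power_pan(float azimuth, float &gain_l, float &gain_r) {
	const float theta = (azimuth + 1.0f) * 0.25f * 3.14159265358979f; // 0..pi/2
	gain_l = std::cos(theta);
	gain_r = std::sin(theta);
}
```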