Closed Kappa971 closed 2 years ago
I believe this should be reopened. If nothing else, I just read someone taking a very uneducated guess that dsoal is still a big work in progress, because there isn't really a plethora of features advertised, and of course everybody "knows" the best of the best is EAX 5. But of course that cannot apply here, if the most that ever existed over directsound is the fourth version (correct, yes?).
I don't know of any DirectSound3D games that use EAX 5.0; I don't think they exist. That DSOAL is still a work-in-progress project is, I think, true (but this also applies to EAX in OpenAL Soft), as there are some unsolved problems with games like Prey and Hitman 2. I reported them, but I can't help beyond that: I'm not a programmer, and I can't demand anything (no one should, these are free projects).
Sure, it's a work in progress in the sense that there are still known bugs (and I guess kcat is more time-limited atm than "ideas-limited" when it comes to declaring this stable).
But it's not work in progress in the sense of being still a vestigial prototype. Every feature is implemented now, to the best of available knowledge, and I believe it should already be giving ALchemy a big run for its money.
As far as features go, there's not much to list; it's DSound3D, and EAX 1-4, which is listed in the readme. At most, I guess it could be mentioned that it supports surround sound and HRTF, though I'm not quite sure what native Windows does for DSound, if they've added support for surround sound and HRTF as well (and it seems kind of silly to list a feature for DSOAL that's already there in normal Windows). I believe I've seen some games think there's hardware DSound even with native (without ALchemy or similar), though I could be mistaken.
Regarding DSOAL being a big WIP, it sort of is and sort of isn't. The basic DSound3D functionality works fine for most things, so in that sense, it's pretty complete. But EAX support seems to have various issues all over, some more serious than others (that some people may be more tolerant of than others), so how well that works seems to be a lot more hit-and-miss. But the issue is, the actual cause for most of these problems is completely obscure. Like Hitman missing random sounds for no apparent reason, or reverb being applied (or not) to sounds it shouldn't (or should) be, where the trace log gives no hint as to the actual problem. EAX itself is so under-documented (particularly 3 and 4) that it's hard to say if OpenAL Soft's EAX code is at fault, or if the app is at fault and native drivers had a workaround, or if DSOAL is mishandling something between the app and OpenAL Soft.
As it is with OpenAL Soft's EAX support, it's like trying to guess what's in a sealed box that I can't touch or glance at. I can't make test apps to see how it behaves with real hardware, so I have to rely on others with hardware for feedback that I have to interpret. It's not as simple as writing a program for someone to run that they give me the output of since I don't know where to look for issues... I just need to start with simple checks in random places to see if the results are as expected, and focus in on where things look off if I find anything. And the results aren't always something I can log, but can instead be a change (or lack thereof) in the actual audio playback. That kind of dynamic/reactive testing doesn't work well if I can't adjust the tests on the fly.
Vista did add "multi-channel software buffers", and I know the 360-era Fallout games still successfully retain surround (even though it was very muffled and subpar without ALchemy.. I guess nobody accounted for the lack of a good mixing engine). And insofar as Spatial Sound isn't WASAPI either, I don't believe directsound should get different treatment.
But anyway, what I meant was mentioning the individual EAX features (or is there even some interesting directsound cap?). In further retrospect though, I guess that could be more visual clutter than anything. People can read the wiki for what "everything" entails. Feature-completeness might still be somewhat better worded/marketed/underlined imho. Like, maybe it's just age, but I have this feeling that the general reputation/understanding of the thing is still stuck at 2015 levels.
But the issue is, the actual cause for most of these problems is completely obscure.
Yes, that's more or less what I suspected too (even though I didn't realize you could barely even test things yourself anymore). Quirks shouldn't detract from the general assessment of the library though. It's not a matter of specification now, but of implementation.
... If your expectation is one of stagnation for a "fair time to come", wouldn't it be time to release some kind of "official" version (#52)? Don't call it stable (even though it is, tbh; allegedly at least better than ALchemy 1.45) or 1.0 (even though it makes you wonder if you aren't aiming a bit too much for absolute perfection), but I reckon anything between 0.7 and 0.95 would do it fairly well.
I can't make test apps to see how it behaves with real hardware
Could.. uhm, remote "debugging" help? Like VNC, RDP or even parsec. Like, I'm pretty confident that at least someone among the @IDrinkLava, @ThreeDeeJay and PCGW servers (and there are surely more, but I hate discord so I'm pretty ignorant of others) would be glad to prepare an old machine for you given some time. They'd probably even ship whole cards, but I guess setting up W7/XP and games is an inconvenience that you'd rather avoid.
But anyway, what I meant was mentioning the individual EAX features (or is there even some interesting directsound cap?). In further retrospect though, I guess that could be more visual clutter than anything. People can read the wiki for what "everything" entails. Feature-completeness might still be somewhat better worded/marketed/underlined imho.
I don't think there are any interesting caps to mention, aside from the fact that it emulates hardware buffers. I don't think feature-completeness is a good thing to advertise at this point, considering DS3D's effect API isn't supported at all (which some games do use). Although I guess it might be an idea to mention more specific features beyond just saying "EAX": that it supports the other effects besides just reverb, and that it can utilize higher quality resamplers, various surround sound methods like discrete surround sound (with individual speaker distance compensation), HRTF, and UHJ, and near field effects (not MacroFX, but a more natural/automatic distance-based effect). Getting with-height output modes working would be a good thing to mention, whenever that comes to fruition.
If your expectation is one of stagnation for a "fair time to come", wouldn't it be time to release some kind of "official" version (https://github.com/kcat/dsoal/pull/52)?
I don't expect stagnation. Though I think at this point, many of the improvements will be on the OpenAL side since that's where the EAX code is now. I don't think I'll need to do anything with DSOAL to fix the EAX issues, unless there's a difference between OpenAL and DSound EAX behavior. Not that there isn't anything to do with DSOAL itself; I'd like to rewrite it in cleaner C++ instead of the C it currently is, and perhaps use loopback devices for rendering to have better control over the output and notification events. I plan to have an OpenAL Soft release "soon"ish, so probably after that (and dealing with any critical issues for a point release), I can put together a basic DSOAL "release" before starting on a rewrite/refactor.
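For reference, a minimal sketch of what the loopback-device approach could look like through OpenAL Soft's ALC_SOFT_loopback extension (assuming the standard extension entry points; the 44.1kHz stereo float format and the absent error handling are just for illustration):

```c
/* Sketch: open a loopback device so OpenAL renders into a caller-provided
 * buffer instead of a real output device. A DSound wrapper could then feed
 * that buffer to its own output path and time buffer notifications itself. */
#include <AL/alc.h>
#include <AL/alext.h>

static LPALCLOOPBACKOPENDEVICESOFT palcLoopbackOpenDeviceSOFT;
static LPALCRENDERSAMPLESSOFT palcRenderSamplesSOFT;

ALCdevice *open_loopback(void)
{
    palcLoopbackOpenDeviceSOFT = (LPALCLOOPBACKOPENDEVICESOFT)
        alcGetProcAddress(NULL, "alcLoopbackOpenDeviceSOFT");
    palcRenderSamplesSOFT = (LPALCRENDERSAMPLESSOFT)
        alcGetProcAddress(NULL, "alcRenderSamplesSOFT");

    ALCdevice *dev = palcLoopbackOpenDeviceSOFT(NULL);
    const ALCint attrs[] = {
        ALC_FREQUENCY, 44100,
        ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT,
        ALC_FORMAT_TYPE_SOFT, ALC_FLOAT_SOFT,
        0 /* end of attribute list */
    };
    alcMakeContextCurrent(alcCreateContext(dev, attrs));
    return dev;
}

/* Called on the wrapper's own mixing timer: pull the frames that are due. */
void render_block(ALCdevice *dev, float *out, ALCsizei frames)
{
    palcRenderSamplesSOFT(dev, out, frames);
}
```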
Could.. uhm, remote "debugging" help? Like VNC, RDP or even parsec. Like, I'm pretty confident that at least someone among the IDrinkLava, 3DJ and PCGW servers (and there are surely more, but I hate discord so I'm pretty ignorant of others) would be glad to prepare an old machine for you given some time.
Maybe. It's not something I've done before, but if there's a way I can remotely use a machine with such hardware, where I can code and build things (or build here and easily transfer them over) to run on that machine, and have the audio output piped back to me so I can hear what the device is doing, that could maybe work.
Having a card shipped to me wouldn't really be helpful since I don't have a Windows machine that could use it, or a copy of Windows to use in a VM that could still access the hardware.
I don't think there are any interesting caps to mention, aside from the fact that it emulates hardware buffers.
Quite pointless indeed. Speaking of buffers and still inside this brainstorm of a more comprehensive readme... what do you think about this drawback?
considering DS3D's effect API isn't supported at all (which some games do use). Although I guess it might be an idea to mention more specific features beyond just saying "EAX"
Uh, well, nice to know. That's really a diverse array of interesting ones (even though I guess many capabilities could just be summarized as "gives dsound access to all the power of openal-soft"). Is there much left to guess about DSFX(dsdmo?) that the wine guys haven't already figured out then?
I can put together a basic DSOAL "release" before starting on a rewrite/refactor.
Uh, well, cool. Whenever you want is fine by me anyway, even after that. p.s. do you believe there are more.. unknown unknowns deep somewhere in the dsound-specific part, or is the can of worms just basically EAX?
or a copy of Windows to use in a VM that could still access the hardware.
It is my understanding that it should definitely be possible, somehow. And Windows is a cakewalk to use even without a license (I mean, not in the sense that workarounds are easy, but that even in "demo mode" the OS doesn't limit any of the basic features). Let's hope github sends ping notices even with edits /s
Speaking of buffers and still inside this brainstorm of a more comprehensive readme... what do you think about this drawback?
That's actually something I've thought about. As it is now with OpenAL Soft, the sources are "virtualized" and separate from the actual mixing voices. So I could in theory ask OpenAL to allow for a bajillion sources, allocate 128 of them for "hardware" buffers, and allocate the rest on-demand for software buffers, making the only limit memory constraints. Only the number of sources that get played will affect mixing performance, but if native DSound doesn't have a limit on the number of simultaneously playing software buffers, it would have performance problems if a lot of them were played too. There are a couple places where OpenAL Soft needs to loop over all allocated sources (on an app/caller thread, not the mixer) even if they're not playing, but I'm not sure what performance impact that would have.
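As a rough illustration of that split (the 4096 figure and the pool layout are invented; ALC_MONO_SOURCES is just a hint, which OpenAL Soft can honor at large values since sources are virtualized):

```c
/* Sketch: request a large source count via context attributes, reserve a
 * fixed pool for "hardware" buffers, and hand out the rest on demand. */
#include <AL/al.h>
#include <AL/alc.h>

#define HW_SOURCES 128

static ALuint hw_pool[HW_SOURCES];

ALCcontext *create_context(ALCdevice *dev)
{
    const ALCint attrs[] = { ALC_MONO_SOURCES, 4096, 0 };
    ALCcontext *ctx = alcCreateContext(dev, attrs);
    alcMakeContextCurrent(ctx);

    /* Reserve sources up front for "hardware" buffers; software buffers
     * would generate further sources on demand, limited only by memory. */
    alGenSources(HW_SOURCES, hw_pool);
    return ctx;
}
```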
Is there much left to guess about DSFX(dsdmo?) that the wine guys haven't already figured out then?
As far as the functionality goes, I don't think so. At minimum I'd need to stub out the interfaces so the app can call the functions on it and not crash, even if the functions do nothing, but Wine's implementation should be able to help with that. Actually making it work with OpenAL effects will be the part that Wine's implementation can't help with, but I'm not sure that's terribly important.
p.s. do you believe there are more.. unknown unknowns deep somewhere in the dsound-specific part, or is the can of worms just basically EAX?
It's largely just EAX at this point. The only significant unknown with DSound itself that I can think of is the {2a8af120-e9de-4132-aaa5-4bdda5f325b8} GUID that relates to some internal interface used by DMusic, but that doesn't seem to affect many apps.
but if native DSound doesn't have a limit on the number of simultaneously playing software buffers, it would have performance problems if a lot of them were played too.
Well, so be it then, if that were to be the case. Potential slowness in the long run seems better than "sounds stop being added at all". And even on native XP it's probably something that those interested should rather fix for good on the application side (e.g. eaxefx).
Anyhow, it just occurred to me what an actually huge blocker is: https://github.com/kcat/dsoal/issues/34 We still haven't figured out when, how and why normal dll hijacking stopped working.
Like, I'm pretty confident that at least someone among the IDrinkLava, ThreeDeeJay and PCGW servers (and there are surely more, but I hate discord so I'm pretty ignorant of others) would be glad to prepare an old machine for you given some time.
@mirh I just have a USB X-Fi sound card that I pass through into a VM to test how games are supposed to sound in XP, but to be honest I've been doubting its accuracy compared to a real PCI/e X-Fi card, because it lacks game mode (only entertainment mode is available) and it sounds an awful lot like ALchemy compared to the distinctively tinny CMSS-3D HRTF of internal cards in hardware mode. So I've been wondering if ALchemy is built into the card as some sort of software emulation, within a device that otherwise presents itself as using hardware. 🤔
I plan to have an OpenAL Soft release "soon"ish, so probably after that (and dealing with any critical issues for a point release), I can put together a basic DSOAL "release" before starting on a rewrite/refactor.
@kcat By the way, I updated the Github actions release workflow and also plan on optimizing it to get rid of redundant code, now that I learned how to release multiple builds in a matrix without overwriting the old one (which is why I repeated some code to begin with) :sweat_smile:
SB1090 uses the CA0189 chipset, not any of the 20Ks. So we already know anything relying on HOAL (and yes, I can confirm this from USB_SupportPack_2_7) is subpar.
Though I just realized that I have never seen direct comparisons of those newer cards running the *native* directsound driver (if they can even run on XP, that is). Not that I have much faith in >2011 Creative knowing what they were doing... but I guess there could still be ways for ksaud.sys to be "decent" even if not perfect. If it's not a total disaster, then.
But anyhow, IMO the best shot/investment/opportunity would still be figuring out a comfy workflow to share the hardware over IP. I don't think we are yet at the point where you can virtualize individual slots altogether (even though modern FTTH should have enough bandwidth to match the original PCI, lol), but normal desktop streaming should already be just fine. Like, the only unknowns seem to be whether multi-channel is wanted, and whether certain solutions compress audio too much (to note, I'm not exactly sure if RDP supports microphones in XP, which is kinda important if you want to use "What U Hear", but between ASIORecAndPlay and a physical loopback hack I'm sure this would be a cakewalk to figure out later).
X-RAM?
It's literally written in the title of the PR. And neither that nor EAX5 belongs to dsoal.
other 3D positional standards - for some there are other wrappers, what about adding such support here?
It is called dsoal for a reason, isn't it? Besides, all the stuff you mentioned (except A3D) either isn't even a game api, or it is "hardware calls" from the pre-Windows days.
EAX5 doesn't belong to dsoal - why is that? From what I read EAX5 brings some additional effects and I don't see why those from EAX4 are applicable, but those from EAX5 are not.
A3D - I was focusing on the dsoal part, e.g. to implement 3D sound APIs over OpenAL - both DS3D(+EAX) and A3D...
THX - I struggle with this one, is it more of an upmixing/downmixing/balancing/encoding (like the various Dolby/DTS) - or effects/3D positioning or both or something else... - but it's definitely for Windows, not DOS.
For the others - sorry, yes, they're more tightly coupled to hardware, and although some were used with Windows, it's probably too much to handle in an API wrapper way.
There's no evidence EAX5 was ever supported through anything but openal. Games getting THX certification doesn't say anything about a dedicated api existing, while "TruStudio" seems like regular sound card post-processing gimmickry that nowadays you would install as an APO. EDIT: this perhaps?
A3D already has an (arguably half-assed) open directsound3d wrapper, and putting aside that I'm not sure openal could fit its sound model, I don't see why it wouldn't make more sense to just iterate on what's already available.
EAX5 doesn't belong to dsoal - why is that? From what I read EAX5 brings some additional effects and I don't see why those from EAX4 are applicable, but those from EAX5 are not.
A3D - I was focusing on the dsoal part, e.g. to implement 3D sound APIs over OpenAL - both DS3D(+EAX) and A3D...
THX - I struggle with this one, is it more of an upmixing/downmixing/balancing/encoding (like the various Dolby/DTS) - or effects/3D positioning or both or something else... - but it's definitely for Windows, not DOS.
For the others - sorry, yes, they're more tightly coupled to hardware, and although some were used with Windows, it's probably too much to handle in an API wrapper way.
DSOAL is a DirectSound3D + EAX wrapper (not perfectly accurate, but that's due to the policy adopted by Creative: rather than releasing documentation and source code, they preferred to throw everything in the bin), and it relies on OpenAL Soft for EAX as well. Then I have to point out that I don't remember any DirectSound3D games with EAX 5.0 support existing; the only such games are OpenAL games, and after those Windows Vista came out.
It would be nice to have Aureal3D support in DSOAL, but I think it's already a miracle that EAX works at all considering what I wrote above... I guess the developers don't fully know how it works; they try (thanks Creative). In fact there are games like Hitman 2 that have sound problems with DSOAL, and nobody can tell what's wrong: whether the game is setting wrong parameters that Creative's drivers had specific hacks for, or whether it is using undocumented functions that DSOAL lacks.
OK about the EAX5/OpenAL and THX.
For A3D it may be good to have an integrated package of the three components: A3D-to-DS3D + DSOAL + OpenAL. And it seems you think there's no further benefit in making a direct A3D-to-OpenAL wrapper?
Looking at the list of games with 3D sound:

* Sensaura GameCODA - 2 exclusive games
* QSound QMixer - 2 games (there are also 47 QSound games listed at [1](https://www.qsound.com/partners/gaming/pc-games.htm), [2](https://www.uvlist.net/groups/games-list/qsound))
* [EarSound IAS](https://dosdays.co.uk/topics/3daudio_games.php) - 11 games, including some exclusively supporting that standard such as Quake II, Civ2, Mechwarrior2, etc.
* [I3DL2](https://www.iasig.org/index.php/projects/projects-menu/21-3d-audio-wg) - DS3D extensions similar to EAX? - 2 exclusive games

Are those features suitable for DSOAL?
And it seems you think there's no further benefit in making a direct A3D-to-OpenAL wrapper?
There would be advantages for sure, but the question is: is anyone able to create something similar? EAX and Aureal3D are both closed-source technologies.
Looking at the list of games with 3D sound:
* Sensaura GameCODA - 2 exclusive games
* QSound QMixer - 2 games (there are also 47 QSound games listed at [1](https://www.qsound.com/partners/gaming/pc-games.htm), [2](https://www.uvlist.net/groups/games-list/qsound))
* [EarSound IAS](https://dosdays.co.uk/topics/3daudio_games.php) - 11 games, including some exclusively supporting that standard such as Quake II, Civ2, Mechwarrior2, etc.
* [I3DL2](https://www.iasig.org/index.php/projects/projects-menu/21-3d-audio-wg) - DS3D extensions similar to EAX? - 2 exclusive games
Are those features suitable for DSOAL?
Of course one of the developers can give clarifications about it, but I think that, as already said, they are all closed-source technologies without documentation available. For example the EAX implementation (was some documentation leaked online?) still has problems after years, and it is not known if they will ever be solved.
And it seems you think there's no further benefit in making a direct A3D-to-OpenAL wrapper?
You would need to create some kind of wavetracing extension. Which, I mean.. is probably not technically impossible (for as much as it would be quite the stretch of imagination, given the complexity required to import the world geometry), but then the question is why not make it "just native" to begin with. Or, I don't know, whether you couldn't recycle some of the groundwork already laid down by TrueAudio/VrWorks/Steam audio
EDIT: ironically enough, the early history of openal did slightly intersect with this and tried to entertain the concept though
Are those features suitable for DSOAL?
The only one there that is an api (and not just middleware) is I3DL2
Well.. it's more like a specification tbh, but still it is true that `IDirectSoundFXI3DL2Reverb8` is a real thing (which shouldn't have been too different from EAX2). Which atm doesn't seem supported.
Then, it might even be that you still need some.. uh, "small regard" in order for them to initialize properly in the year of the lord 2024 - but that's really application and case specific.
On the other hand, an IAS guy in 1999 seemed to claim that ds3d multi-channel support was really misleading, and that cards had to step in to work around the deficiency? Not sure if it was just a lowkey jab at the fact that windows couldn't do dolby (AC3? nope, just prologic) or DTS, or what. EDIT: it's probably explained here, you supposedly couldn't have more than a single pair of channels EDIT2: until mid 1999
p.s. there might still be more if we start to consider pre-windows, pre-directsound, pre-AC97 applications but then again seriously this is starting to become territory for the dos guys
I3DL2 is basically EAX2, specifying a per-buffer/source low-pass filter with an adjustable 5khz reference gain, and a reverb effect with parameters matching EAX2's reverb, which is also equivalent to EFX's standard AL_EFFECT_REVERB (which is different from other types of reverb available at the time, which tended to be more abstract with less configurability; think Freeverb). I thought I had a pdf about it, but I can't find it, and that site's download link doesn't seem to work. `IDirectSoundFXI3DL2Reverb8` is Microsoft's implementation of the EAX2/I3DL2-style reverb, fitted to the DSound8 FX API.
That, yeah. The structures and properties all seem to be identical to EAX 2, simply using the I3DL2 moniker instead of EAX, EAX2, or EAX20. They just use different GUIDs for the property sets. It might be possible to simply check for the I3DL2 GUIDs and substitute the EAX 2.0 ones when calling OpenAL with them. Unless there are apps that use both EAX2 and I3DL2 and they're expected to hold separate state, in which case OpenAL Soft would need to recognize the GUIDs and store the properties separately.
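A hypothetical sketch of that substitution, as it might look in DSOAL's IKsPropertySet handling; the I3DL2 GUID name below is a placeholder (the real identifier would come from the I3DL2 headers), and this assumes no app needs separate EAX2/I3DL2 state:

```c
/* Sketch: map an I3DL2 property-set GUID onto its EAX 2.0 equivalent before
 * forwarding the property call to the EAX handling. */
#include <windows.h>
#include <dsound.h>

/* From the EAX 2.0 headers: */
extern const GUID DSPROPSETID_EAX20_ListenerProperties;
/* Placeholder name for the I3DL2 listener property set: */
extern const GUID DSPROPSETID_I3DL2_ListenerProperties;

static const GUID *map_propset(const GUID *guidPropSet)
{
    if (IsEqualGUID(guidPropSet, &DSPROPSETID_I3DL2_ListenerProperties))
        return &DSPROPSETID_EAX20_ListenerProperties;
    return guidPropSet; /* everything else passes through unchanged */
}
```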
Enlightening info, thanks.
So, the unsupported features:
DSOAL has some compatibility issues with the game, causing it to crash within seconds of gameplay. Here's the DSOAL error log:

```
047c:warn:dsound:DSShare_Create PKEY_AudioEndpoint_PhysicalSpeakers is not a ULONG: 0x0000
047c:err:dsound:DSBuffer_Initialize Panning for multi-channel buffers is not supported
047c:err:dsound:DSBuffer_SetLoc Out of software sources
047c:err:dsound:DSBuffer_SetLoc Out of software sources
```
As for:
there might still be more if we start to consider pre-windows ... territory for the dos guys
I see only three chips with 3D audio from the DOS era (i.e. ISA or pre-1996):

* QSound - is there DOS/Win3x software using that as an API?
* SRS - that seems to me like an HRTF algorithm rather than a 3D API?
* A3D
Sector's Edge has a pretty cool implementation of wavetracing using OpenAL Soft but yeah, we'd probably need a direct A3D>OALS wrapper, because I really doubt any of the current A3D wrappers process wavetracing calls/parameters/geometry, and those would be useless in DirectSound3D anyway, which AFAIK wouldn't know what to do with them.
AFAIK A3D support ("A3DOAL", if you will) would still require a separate wrapper because games usually check for A3D.dll/A3DAPI.dll. So perhaps something like this could be used as a base, replacing calls with OpenAL, and somehow processing wavetracing internally like Sector's Edge does: https://github.com/worknd/A3D-Live
In the meantime, in some cases it might be better to wrap DS3D directly instead of A3D>DS3D. e.g. Deus Ex has A3D which can be wrapped to DS3D for DSOAL, but using the DS3D mode directly has better positioning.
QSound
Weirdly enough, in some apps, it's compatible with DS3D for DSOAL or even an X-Fi sound card in XP. Not sure if games are this flexible/compatible, though.
Sensaura
Also might support X-Fi/DSOAL like Athene, Donuts, Demonstration. Some games using gameCODA have Sensaura 3D audio built in without requiring proprietary hardware, and I found out a while ago that we can use Creative's wrap_oal.dll with DSOAL to override it with OpenAL Soft 3D HRTF and EAX, at least in some games https://github.com/kcat/openal-soft/issues/1001#issuecomment-2325040594
I'm not too familiar with the other APIs or sound card capability emulation, but there's a list of the best known methods for 3D audio/reverb on modern systems/any sound cards in the binaural audio database spreadsheet or search views, so if a better method is eventually found and reported, it will be updated. 👀👌
Sector's Edge has a pretty cool implementation of wavetracing using OpenAL Soft
Using AL for the output (and I don't know, perhaps mixing at most) or even for the ray traced part too? Like, I'm no developer, but I would expect you to need "some additional spicy thing" to unlock something that nobody has rolled out anywhere yet as of 2024.
Weirdly enough, in some apps, it's compatible with DS3D for DSOAL or even an X-Fi sound card in XP.
It's not weird at all, every interview has them flexing that they support everything under the sun.
Also might support X-Fi/DSOAL like Athene, Donuts, Demonstration.
Which seems again completely normal for any middleware. At least if released after 1997.
we can use Creative's wrap_oal.dll with DSOAL to override it with OpenAL Soft 3D HRTF and EAX, at least in some games https://github.com/kcat/openal-soft/issues/1001#issuecomment-2325040594
As I said, that insane thing likely only makes sense because the openal32.dll loading has a blacklist against devices without the "Creative" name in it. The rule is always that the wrapper is worse than native openal.
Using AL for the output (and I don't know, perhaps mixing at most) or even for the ray traced part too?
I'm not quite sure how it works under the hood, but AFAIK the wavetracing is custom built for the game which is what's doing the heavy lifting, but it still uses OpenAL Soft for 3D HRTF so it's not like it's applying reverb then passing the already mixed audio to OpenAL Soft, like Bioshock does. Maybe it bakes the reverb into each audio sample or just estimates the closest EAX parameters/room preset? Perhaps @Vercidium could explain how it was done 👀
As I said, that insane thing likely only makes sense because the openal32.dll loading has a blacklist against devices without the "Creative" name in it. The rule is always that the wrapper is worse than native openal.
But how come wrap_oal.dll works even on my Realtek sound card, yet soft_oal.dll doesn't work even when using my Creative SB X-Fi Surround 5.1? Also I tried custom OpenAL Soft builds with the OpenAL device renamed to `Generic Software`, `Generic Hardware`, and others just out of curiosity, but it still didn't work, even with Windows XP compatibility.
But how come wrap_oal.dll works even on my Realtek sound card, yet soft_oal.dll doesn't work even when using my Creative SB X-Fi Surround 5.1?
https://github.com/kcat/openal-soft/blob/1.23.1/al/state.cpp#L59
Also I tried custom OpenAL Soft builds with the OpenAL device renamed to Generic Software, Generic Hardware, and others just out of curiosity but it still didn't work, even with Windows XP compatibility.
I don't know/remember what the difference between AL and ALC was, but I suspect you aren't even looking in the right file.
But how come wrap_oal.dll works even on my Realtek sound card, yet soft_oal.dll doesn't work even when using my Creative SB X-Fi Surround 5.1? Also I tried custom OpenAL Soft builds with the OpenAL device renamed to `Generic Software`, `Generic Hardware`, and others just out of curiosity, but it still didn't work, even with Windows XP compatibility.
https://github.com/kcat/openal-soft/issues/650#issue-1119482275
That's not it considering he's also presumably trying to replace openal32.dll.
Using AL for the output (and I don't know, perhaps mixing at most) or even for the ray traced part too?
I'm not quite sure how it works under the hood, but AFAIK the wavetracing is custom built for the game which is what's doing the heavy lifting, but it still uses OpenAL Soft for 3D HRTF so it's not like it's applying reverb then passing the already mixed audio to OpenAL Soft, like Bioshock does. Maybe it bakes the reverb into each audio sample or just estimates the closest EAX parameters/room preset? Perhaps @Vercidium could explain how it was done 👀
I can't speak for any particular game, but this is the general intended idea for dynamic environments. Cast rays out from the listener to detect size and shape of the room and the materials of the walls, and calculate the reverb parameters to simulate the reverberation for such a room. For older hardware, you may just use that information to select the closest preset, or you can more directly calculate the individual density, diffusion, decay rate, etc, parameters (and also use the direction and distances of the ray tests to calculate the early and late reverb delay and panning vectors to steer it in real time for the distance and direction the early and late stage reflections would be heard from).
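For the "closest preset" fallback, a toy sketch (the two ray-derived inputs, the candidate list, and the unweighted distance metric are all invented for illustration):

```c
/* Sketch: score each candidate EFX preset against ray-derived estimates
 * and keep the nearest one. */
#include <math.h>
#include <stddef.h>
#include <AL/efx-presets.h>

static const EFXEAXREVERBPROPERTIES candidates[] = {
    EFX_REVERB_PRESET_ROOM,
    EFX_REVERB_PRESET_STONEROOM,
    EFX_REVERB_PRESET_HANGAR,
    EFX_REVERB_PRESET_OUTDOORS_ROLLINGPLAINS,
};

const EFXEAXREVERBPROPERTIES *closest_preset(float estDecayTime, float estDiffusion)
{
    const EFXEAXREVERBPROPERTIES *best = &candidates[0];
    float bestScore = INFINITY;
    for (size_t i = 0; i < sizeof(candidates)/sizeof(candidates[0]); ++i) {
        float score = fabsf(candidates[i].flDecayTime - estDecayTime)
                    + fabsf(candidates[i].flDiffusion - estDiffusion);
        if (score < bestScore) { bestScore = score; best = &candidates[i]; }
    }
    return best;
}
```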
Which reminds me. OpenAL Soft's reverb invokes a "costly" pipeline change when changing certain parameters. Adjusting said parameters too often can cause reverb processing to constantly be in a high-cost mode as it's continuously trying to blend between the changed parameters, which can also risk some audio glitching. I still need to look into how much tolerance there can be for changing those parameters to values that are "close enough" that they can be changed in-place without needing a pipeline change to prevent audible artifacts, as well as a way to delay applying large changes that come too close after previous large changes, to avoid audible artifacting from the previous pipeline still being audible when it needs to be reused.
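Something like the following gate is presumably what that tolerance check would boil down to (the fields and epsilon are invented; this is not OpenAL Soft's actual logic):

```c
/* Sketch: if the new values are within tolerance of the current ones,
 * update in place; otherwise trigger the costly pipeline rebuild. */
#include <math.h>
#include <stdbool.h>

struct ReverbParams { float decayTime, density, diffusion; };

static bool close_enough(const struct ReverbParams *cur,
                         const struct ReverbParams *next)
{
    const float eps = 0.05f; /* invented tolerance */
    return fabsf(cur->decayTime - next->decayTime) < eps
        && fabsf(cur->density   - next->density)   < eps
        && fabsf(cur->diffusion - next->diffusion) < eps;
}
```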
@Kappa971 Turns out the trick was a specific vendor, not device name:
`alVendor` of the wrapper would in fact manage to pass that! So that's kinda its entire super magic.
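Speculatively, a gameCODA-style hardware check would then amount to something like this (the substring test is a guess at what the middleware does):

```c
/* Sketch: gate the "hardware" path on the AL vendor string rather than
 * the device name. */
#include <string.h>
#include <AL/al.h>

int vendor_is_creative(void)
{
    const ALchar *vendor = alGetString(AL_VENDOR);
    return vendor && strstr(vendor, "Creative") != NULL;
}
```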
Wait, it actually works!! https://github.com/ThreeDeeJay/openal-soft/commit/dc150c8ce5f7eecd2a5adaa0caf158f5f6062b34 Just gotta rename soft_oal.dll to OpenAL32.dll so there's no need for DSOAL or wrap_oal in Just Cause👏 Setting the Windows speaker config to 7.1 seems to cause missing sounds/crash (maybe gameCODA didn't support it so it gets confused?). MOV still refuses to load OpenAL regardless of UseHardwareSound and XP compat. mrpenguin tested it in Fable but it crashed in-game and we're trying to figure out why. Anyhow, I guess we might need a vendor string setting cuz I doubt it'd use Creative's by default.
I can't speak for any particular game, but this is the general intended idea for dynamic environments. Cast rays out from the listener to detect size and shape of the room and the materials of the walls, and calculate the reverb parameters to simulate the reverberation for such a room. For older hardware, you may just use that information to select the closest preset, or you can more directly calculate the individual density, diffusion, decay rate, etc, parameters (and also use the direction and distances of the ray tests to calculate the early and late reverb delay and panning vectors to steer it in real time for the distance and direction the early and late stage reflections would be heard from).
I wonder if it'd be feasible to develop a ReShade addon to use games' depth/normal/etc buffers for a (partial, since FOV is limited ofc) environment space estimation, kinda like the Ray Traced Global Illumination shader does with light. Obviously not as good as the real thing, but it'd probably be way faster and more compatible, though I wonder if it'd be better to piggy-back on RTX Remix now that VRWorks Audio was killed upstream. 🤔
In Sector's Edge the raytracing is all part of the game itself and separate from OpenAL. All the audio processing happens in OpenAL with low pass filters and one reverb effect. Raytracing is used to tweak the properties of the effect and filters.
For reverb, raytracing is used to determine:
* `roomSize`: the size of the room around the listener
* `returningRays`: each ray bounces 8 times, and each time it bounces it checks for line-of-sight (LOS) back to the listener. If all rays and all bounces have LOS, `returningRays` is `1.0`. If only 25% of bounces have LOS, its value is `0.25`. This controls the strength of the reverb
* `escapedRays`: how many rays escaped out into the skybox/atmosphere

These 3 variables are used to interpolate between reverb presets, for example a castle map uses these four:

* `noReverb`
* `smallStoneRoom`
* `largeStoneRoom`
* `rollingHills`

I then linearly interpolate between these reverb presets in this order:

1. `smallStoneRoom` and `largeStoneRoom` based on `roomSize`, to produce a new `netStoneRoom` reverb preset
2. `noReverb` and `netStoneRoom` based on `returningRays`, to produce a new `room` preset
3. `room` and `rollingHills` based on `escapedRays`, to produce the final `netReverb` preset.

These reverb presets have properties like `flDensity`, `flDiffusion`, `flGain`, etc, which are then used to control the parameters of an OpenAL Effect:
```c
alEffecti(effectID, AL_EFFECT_TYPE, AL_EFFECT_EAXREVERB);
alEffectf(effectID, AL_EAXREVERB_DENSITY, netReverb.flDensity);
alEffectf(effectID, AL_EAXREVERB_DIFFUSION, netReverb.flDiffusion);
alEffectf(effectID, AL_EAXREVERB_GAIN, netReverb.flGain);
```
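The interpolation step itself could be a component-wise lerp over the preset struct; a sketch assuming the EFXEAXREVERBPROPERTIES layout from OpenAL Soft's efx-presets.h (only a few fields shown, and this isn't Sector's Edge's actual code):

```c
#include <AL/efx-presets.h>

static float lerpf(float a, float b, float t) { return a + (b - a) * t; }

/* Blend two reverb presets; a full version would cover every member and
 * clamp the results to the valid EFX parameter ranges. */
EFXEAXREVERBPROPERTIES lerp_reverb(const EFXEAXREVERBPROPERTIES *a,
                                   const EFXEAXREVERBPROPERTIES *b, float t)
{
    EFXEAXREVERBPROPERTIES out = *a;
    out.flDensity   = lerpf(a->flDensity,   b->flDensity,   t);
    out.flDiffusion = lerpf(a->flDiffusion, b->flDiffusion, t);
    out.flGain      = lerpf(a->flGain,      b->flGain,      t);
    out.flDecayTime = lerpf(a->flDecayTime, b->flDecayTime, t);
    return out;
}
/* e.g. netStoneRoom = lerp_reverb(&smallStoneRoom, &largeStoneRoom, roomSize); */
```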
Sounds on the other side of a wall should be muffled. On startup I allocate 64 low pass filters of increasing gain, and use raytracing to determine how 'occluded' each sound source is. This occlusion value is used to select which filter to apply to each sound source.
To start, if the listener has direct LOS with a sound source, then that sound source is 0% occluded and should have the weakest low pass filter applied (which has 100% `AL_LOWPASS_GAIN` and `AL_LOWPASS_GAINHF`, which is essentially no filter).
If not, we need to figure out how occluded the sound source is. Each sound source has an 'accessibility' value (the opposite of occlusion), which accumulates as each ray achieves LOS with the sound source:

* A ray with direct LOS contributes `1.0` to `accessibility`
* A ray that achieves LOS after one bounce contributes `0.5` to `accessibility`, and the contribution keeps halving on each bounce.
* A ray that never achieves LOS contributes `0.0` to `accessibility`

After casting 1000 rays, we'll have an `accessibility` value between `0.0` and `1000.0`. If we map `accessibility` linearly onto our 64 low pass filters - e.g. an `accessibility` of `500.0` would select the 50% strength low pass filter - then sounds will become quickly muffled around corners, e.g. an enemy walking away from you and around a corner.
This is very unnatural, because someone who's just around a corner should still be clearly audible. To normalise this occlusion value, I manually tweaked and settled on this formula:
```csharp
var occlusion = 1 - rayData.accessibility / TOTAL_RAY_COUNT;
var adjustedOcclusion = -MathF.Pow(2.718f, 20 * occlusion - 20) + 1;
```
This gives us the below graph, where the horizontal axis is occlusion and the vertical axis is the low pass filter gain. The low pass filter gain is close to 100% until occlusion reaches about `0.7`, and then it sharply drops.
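In EFX terms, applying the result might look like the sketch below. Note that, per the formula above, `adjustedOcclusion` effectively *is* the gain curve: near 1.0 until occlusion reaches about 0.7, then dropping toward 0. A single shared filter per source is assumed here instead of the 64 pre-allocated ones:

```c
/* Use OpenAL Soft's exported EFX functions directly. */
#define AL_ALEXT_PROTOTYPES
#include <AL/al.h>
#include <AL/efx.h>

/* Sketch: muffle a source by scaling the high-frequency gain of its
 * direct-path low-pass filter. */
void apply_occlusion(ALuint source, ALuint filter, float adjustedOcclusion)
{
    alFilteri(filter, AL_FILTER_TYPE, AL_FILTER_LOWPASS);
    alFilterf(filter, AL_LOWPASS_GAIN, 1.0f);
    alFilterf(filter, AL_LOWPASS_GAINHF, adjustedOcclusion);
    alSourcei(source, AL_DIRECT_FILTER, (ALint)filter);
}
```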
Each ray's contribution to a sound source's `accessibility` is stored in an array, so that the total `accessibility` can be updated like a rolling average. This means we can cast a few rays each frame (on a background thread) to update each sound source's `accessibility` in real time.
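One way to structure that rolling update (a hypothetical layout, not Sector's Edge's actual code): keep each ray slot's last contribution in a ring buffer and adjust the running total as new rays land.

```c
#define TOTAL_RAY_COUNT 1000

struct Accessibility {
    float contrib[TOTAL_RAY_COUNT]; /* last contribution per ray slot */
    float total;                    /* running sum: 0.0 .. TOTAL_RAY_COUNT */
    int   next;                     /* next slot to overwrite */
};

/* Replace the oldest ray's contribution with the newest one; only a few
 * rays need casting per frame to keep the value current. */
void add_ray(struct Accessibility *a, float contribution)
{
    a->total += contribution - a->contrib[a->next];
    a->contrib[a->next] = contribution;
    a->next = (a->next + 1) % TOTAL_RAY_COUNT;
}
```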
I also experimented with permeation, which counts the number of voxels between the sound source and the surface that each ray first bounces off. This was used to adjust the strength of the low pass filter, so that you could press your player up against a thin wall to better hear what's on the other side.
If the listener is in a room with one open window, then rain should sound like it's coming from that open window. So rather than playing rain as a global sound, I gave it a spatialised position in the world, and controlled its position based on the average direction of each ray that eventually reached the skybox. For example, if more rays that are fired to the left of the player eventually reach the skybox, then the rain sound should be positioned to the left of the player. If not many rays reach the skybox, then I move the rain sound source further away from the player to make it sound quieter.
For the reverb, you may also be able to integrate Eyring's reverberation time equation to dynamically estimate the room's decay time.
```
T60 = -0.161 * V / (S * ln(1 - a))
```
Where `T60` is the time in seconds for the sound to decay by -60dB, `V` is the total volume of the room in meters^3, `S` is the total surface area of the room in meters^2, and `a` is the average absorption coefficient of all the room surfaces (between 0 and 1, where 1 is no reflection (the sound escapes the room and doesn't contribute anything) and 0 is fully reflective). The ray tests could give you information about the materials that make up the room, which can contain separate low-, mid-, and high-frequency absorption coefficients (or just low and high), which are each weighted/averaged and fed through the equation; then set the decay time properties (clamped as appropriate):
```c
alEffectf(effectID, AL_EAXREVERB_DECAY_TIME, midT60);
alEffectf(effectID, AL_EAXREVERB_DECAY_HFRATIO, highT60 / midT60);
alEffectf(effectID, AL_EAXREVERB_DECAY_LFRATIO, lowT60 / midT60);
```
The primary issue would be calculating the room volume and surface area using the ray bounces. It can certainly be done, the trick would be to do it efficiently and accurately enough to not throw off the results too badly.
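For a feel of the numbers (values invented): an 8 m x 10 m x 6 m room gives V = 480 and S = 376, and with an average absorption of a = 0.3 the equation works out to roughly 0.58 seconds of decay time.

```c
#include <math.h>

/* Eyring's equation as given above. */
float eyring_t60(float V, float S, float a)
{
    return -0.161f * V / (S * logf(1.0f - a));
}
/* eyring_t60(480.0f, 376.0f, 0.3f) == -0.161*480 / (376*ln(0.7)) ~= 0.576 */
```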
BTW you guys might've seen the GSound demo somewhere and someone in our server posted the SDK and source code (MIT variant?) here: https://drive.google.com/drive/folders/12ZoIT3fY5njvCEybba9tzbU1_Y-KrrgX 👀
In order to perform sound propagation rendering, the supplied gsound::SoundPropagationRenderer class can be used or another external renderer if so desired. The purpose of a sound propagation renderer is to take the audio from each sound source and auralize it for a single listener using the sound propagation paths generated in the propagation step. It is recommended that the provided SoundPropagationRenderer be used rather than a pre-existing sound library such as OpenAL, WWise, or FMOD. These libraries are not designed in order to perform the frequency-dependent effects necessary for different SoundMaterial types and diffraction efficiently. However, these libraries can still be used to provide audio data as input to the SoundPropagationRenderer and can also be used to handle sending the output of the renderer to system audio devices (though this functionality is already present in GSound).
kinda like the Ray Traced Global Illumination shader does with light.
That's kinda what x3daudio1_7_hrtf and dsoal already do, isn't it? There's only so much you can hook in an api before the engine's own internals start.
Anyhow, if this isn't to discuss some kind of `ALC_EXT_WAVETRACING` extension that could make this more of a walk in the park, I think we are digressing too much.
@mirh No? I'm not quite sure what you mean. They both use OpenAL Soft, at least partially, but they'd need some geometry information to simulate rays. So if an A3D wrapper isn't feasible or as compatible, a ReShade shader like DisplayDepth to just send the raw depth map to OpenAL Soft could allow muffling a sound if the expected path to it is occluded by geometry. e.g. a close sound coming from around the direction of this eagle wouldn't be occluded, but a far one coming from behind the character (or a wall in its place) would. Matching listener to game/real FOV and sounds outside of it would be tricky tho (maybe just add extra reverb or mirror/extrapolate space from the FOV to create a 360° depth buffer?).
Just out of curiosity, perhaps @BlueSkyDefender would have a better idea of how to send depth information to external software kinda like SuperDepth3D_VR does with the stereoscopic 3D views in the VR companion app 👀
@mirh No? I'm not quite sure what you mean. They both use OpenAL Soft, at least partially, but they'd need some geometry information to simulate rays. So if an A3D wrapper isn't feasible or as compatible, a ReShade shader like DisplayDepth to just send the raw depth map to OpenAL Soft could allow muffling a sound if the expected path to it is occluded by geometry. e.g. a close sound coming from around the direction of this eagle wouldn't be occluded, but a far one coming from behind the character (or a wall in its place) would. Matching listener to game/real FOV and sounds outside of it would be tricky tho (maybe just add extra reverb or mirror/extrapolate space from the FOV to create a 360° depth buffer?).
Just out of curiosity, perhaps @BlueSkyDefender would have a better idea of how to send depth information to external software kinda like SuperDepth3D_VR does with the stereoscopic 3D views in the VR companion app 👀
There are a few ways to do this.
You can make a special format that stores depth in a way that has minimal effect on the viewer when they look at it. So basically, encode depth into the color image itself and decode it in your capture app.
Or just send the depth buffer directly from the app over a pipe or something. The simplest way is to do a form of 2D+Depth format and just use ReShade to do it.
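For the "encode depth into the color image" route, a sketch of one possible packing (the 24-bit RGB layout is just an example, not what Depth3D actually emits):

```c
#include <stdint.h>

/* Pack a normalized depth value into an RGB triplet on the game side... */
void encode_depth(float depth01, uint8_t rgb[3]) /* depth01 in [0, 1] */
{
    uint32_t q = (uint32_t)(depth01 * 16777215.0f); /* 2^24 - 1 */
    rgb[0] = (q >> 16) & 0xff;
    rgb[1] = (q >> 8) & 0xff;
    rgb[2] = q & 0xff;
}

/* ...and recover it in the capture app. */
float decode_depth(const uint8_t rgb[3])
{
    uint32_t q = ((uint32_t)rgb[0] << 16) | ((uint32_t)rgb[1] << 8) | rgb[2];
    return (float)q / 16777215.0f;
}
```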
The funny thing is that you can bounce rays off the depth buffer, since this is how my RadiantGI and Depth3D work. SuperDepth3D is a ray-marched 3D effect shader. This means you will have missing information, just like both of those shaders. But you can infer information from the depth buffer, like density, distance, etc. Since we shoot rays from the camera out, information from the back face needs to be inferred. So as soon as a ray goes off screen, you have to decide what happens in that case.
Hi, and thanks for DSOAL. I think it would be useful to indicate in README.md the currently supported EAX 1-4 functionalities, or to indicate which are not or cannot be implemented due to the fact that there is no documentation about it. This is just an idea, but it would help to understand what is supported and what is not.