flibitijibibo closed this issue 6 years ago
FACT indev repository has been created with header file and stub functions made and ready to be filled in:
AudioEngine/SoundBank/WaveBank parsing has been ported to C in FACT. This should make it relatively easy for someone to fill in all the functions that don't have to do with Cue generation and management, and from there we get to start the fun of figuring out how Cues are actually put together (not even factoring in the sound mixer that pulls in samples from the Cue instances).
Did a bunch of work over the past couple days to get all the non-mixer stuff done in FACT. As a reminder (for those who only read these threads via e-mail) the repository is over here:
https://github.com/flibitijibibo/FACT
The non-mixer TODO items mostly have to do with 3D audio, stuff directly relating to the sound backend (such as AudioRenderer) and stuff that FNA doesn't care about (IXACTWave, Notifications, etc.). The only notable gap right now is the selection of Sounds and Variation table entries, but that partially has to do with it being something that the mixer itself will have to get involved with at some point, so for now I've left it alone.
So, "in" "theory", you can load engines/banks, parse all the data, and "play" Cues accurately. The next really big step is the mixer - a lot of why FACT has been quiet has been because I've been reading up on... well, how to do it correctly. Things we have to do in the audio output callback, organized by difficulty:
Easy:
Medium:
Hard:
lol Good Luck:
I'll probably do this in the exact order listed above, but if someone wants to take an early shot at the middle/bottom of the list that'd get a lot of the harder stuff out of the way very quickly while I focus on junk more closely related to XACT accuracy (none of which really apply to the hard stuff, other than maybe the reverb input parameters).
It turns out FNA actually cares deeply about IXACT3Wave... you know how in XACT there's "PlayWaveEvent"? Yeah. I am the absolute smartest.
https://github.com/flibitijibibo/FACT/commit/6cd39b38186e0411d77f461e62b8b4c30f5e10c2
This is kind of nice though because it draws a very clear line between the role SoundBank and WaveBank play in the mixer - this is ultimately how it will work:
void AudioCallback(void *userdata, void *stream, uint32_t len)
{
	/* FACTContext also contains a SoundEffect manager */
	FACTContext *ctx = (FACTContext*) userdata;
	FACTAudioEngine *engine = ctx->engineList;
	FACTSoundBank *sb;
	FACTCue *cue;
	FACTWaveBank *wb;
	FACTWave *wave;

	/* Update Engines */
	while (engine != NULL)
	{
		/* Update SoundBanks */
		sb = engine->sbList;
		while (sb != NULL)
		{
			cue = sb->cueList;
			while (cue != NULL)
			{
				/* This updates the Waves! */
				FACT_INTERNAL_UpdateCue(cue);
				cue = cue->next;
			}
			sb = sb->next;
		}

		/* Update WaveBanks */
		wb = engine->wbList;
		while (wb != NULL)
		{
			wave = wb->waveList;
			while (wave != NULL)
			{
				/* This actually pushes the Waves! */
				FACT_INTERNAL_MixWave(wave, stream, len);
				wave = wave->next;
			}
			wb = wb->next;
		}
		engine = engine->next;
	}

	/* TODO: SoundEffect stuff */
}
In addition to making the mixer callback really simple, it also solves the weird case where someone might exclusively use WaveBank without a SoundBank, which is supported by XACT but not XNA.
This separates the previous TODO list into separate categories:
Cue:
Easy:
Medium:
Wave:
Easy:
Medium:
Hard:
lol Good Luck:
Surprise surprise, Wave is a whole lot harder to do. So for now I'll probably be focusing on Cue. Since Wave is pretty much entirely separate from Cue someone could in theory work on Wave on their own without any serious risk of collisions...
There is now a test program to make sure your XACT data is still readable by FACT:
https://github.com/flibitijibibo/FACT/commit/f03d8c2d4f98b05cb72fb3db77a505f915ba11f4
So far I've only gone through easy data, will spend some time running this through all my existing XACT data over the next week. If I didn't work on your game and it uses XACT, give this test a try!
I've started work on what is currently called the FACT Auditioning Tool:
https://github.com/flibitijibibo/FACT/tree/master/tool
In short, it's just an application using SDL2/OpenGL/ImGui to give us the ability to work with XACT data more interactively, rather than just getting text dumps. The structure is pretty simple:
FACTTool_Update(), where we can push the entire interface simply by making ImGui calls, storing XACT data and other miscellaneous blobs of data just above the function.

I spent the afternoon on _main and _ui; all that's left is _fact, and then this tool will actually have some use. For now we're just going to do the same as testparse.c and let people open up AudioEngines, SoundBanks, and WaveBanks to view the struct data, but as we figure out things like the audio output we can start doing really useful stuff: playing individual Wave/Sound/Cue entries, interacting with global/instance variables, and visualizing variable values along RPC curves as we're listening to the output in real time. Having an isolated tool that we can use to compare output to the official XACT tools will make accuracy tests SO MUCH easier, and simplified testing is going to be really important while we're spending the next couple months reinventing the universe behind the scenes...
The actual UI has now been started! It looks like this right now:
Pretty cool, right? Certainly nicer than just reading a giant text dump.
With this, I'll be returning to Cue updates... I encourage XACT users with a C++ compiler to try this out, and report any issues they find with parsing. Parsing alone is a LOT of code, so I wouldn't be surprised if I managed to mess up the port in at least one place.
A quick update since it's been pretty quiet for a few weeks now...
I've started the work to actually wire up all the XACT goo to an SDL audio device! SDL audio is really simple, so most of the work has just been implementing things from the XACT API that SDL would like to have (device names is the most obvious example) and then prepping the audio callback. It works pretty much exactly like I said it would:
Of course, to test audio playback, you need audio... so I've shifted from Cue work to Wave work. The good news is that the Wave API is really simple, so all of the real work is in the mixer. It's really easy to test; basically you take a WaveBank and hit Play on the wave entry you want to hear. I added that process to FACTTool as a single button:
https://github.com/flibitijibibo/FACT/commit/570f4400f04bfc65a6636875b507d7240e558d05
You can spam it as much as you like, and if FACT is doing its job correctly the tool can easily clean it up by just iterating through the list of Waves you've generated.
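That cleanup pass boils down to one walk over a singly linked list, unlinking and freeing any Wave that has stopped. Here's a minimal sketch; the `state` and `next` field names are assumptions for illustration, not FACT's actual struct layout:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical Wave node; field names are illustrative only */
typedef struct Wave
{
	int state; /* 0 = playing, 1 = stopped */
	struct Wave *next;
} Wave;

/* Unlinks and frees every stopped Wave, returns the new list head */
Wave* CleanupWaves(Wave *list)
{
	Wave **link = &list;
	while (*link != NULL)
	{
		if ((*link)->state == 1)
		{
			Wave *dead = *link;
			*link = dead->next; /* Unlink before freeing */
			free(dead);
		}
		else
		{
			link = &(*link)->next;
		}
	}
	return list;
}
```

Using a pointer-to-pointer here means the head and interior nodes are handled by the same code path, which keeps the spam-the-play-button case trivial.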
The main reason I started working on all this stuff is because SDL just got a sexy new audio streaming API:
https://wiki.libsdl.org/Tutorials/AudioStream
This does pretty much everything I need it to do (resampling, bit conversion, channel mixing) and does it in a MUCH better API than the SDL_AudioCVT structure. With this, we have a whole lot less busy work in the mixer callback:
Uint8 *wavedata;
if (wave->format == MSADPCM)
{
	wavedata = DecodeMSADPCM(
		wb->entries[wave->index],
		wave /* Stores offset, staging buffer/cache, etc. */
	);
}
else
{
	wavedata = wb->entries[wave->index] + wave->offset;
}

/* size is calculated based on len, pitch shift,
 * and remaining samples in the wave
 */
SDL_AudioStreamPut(wave->cvt, wavedata, size);

/* Then we get the converted data and do all the horrible things to it */
int rec = SDL_AudioStreamGet(wave->cvt, wavedata, len);
Pretty much the only thing it doesn't do is pitch shifting, but that's not really something SDL needs to care about (or is it?).
Once Waves work it'll be a little bit easier to work on Cues since we can hear what's happening as we work on it. PlayWaveEvent is pretty simple, but all the other events can be really messy if you don't work with the active Waves properly (I learned that the hard way).
Lastly, I've updated the TODO list in the OP to be a little more verbose. I'm aiming to make the list as thorough as possible so people wanting to contribute don't have to feel like they're taking on a REALLY big task when, in reality, FACT just has lots of 1-afternooners in there.
The MSADPCM decoder is in. It's about as pretty as I was expecting it to be:
PCM reading also works, so now data should play back but it'll sound wrong because we're not resampling yet. I have a rough idea in code but it's not working yet, need to see if I'm just using SDL_AudioStream incorrectly...
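For reference, the MSADPCM math itself is small: each nibble updates a two-sample predictor and an adaptive delta, using the standard MSADPCM coefficient and adaptation tables. A standalone sketch of the per-nibble step (struct and function names here are illustrative, not FACT's actual code):

```c
#include <stdint.h>

/* Standard MSADPCM tables */
static const int32_t adaptTable[16] =
{
	230, 230, 230, 230, 307, 409, 512, 614,
	768, 614, 512, 409, 307, 230, 230, 230
};
static const int32_t coeff1[7] = { 256, 512, 0, 192, 240, 460, 392 };
static const int32_t coeff2[7] = { 0, -256, 0, 64, 0, -208, -232 };

/* Illustrative per-channel decoder state */
typedef struct ADPCMState
{
	uint8_t predictor; /* Coefficient pair index, 0-6 */
	int32_t delta;     /* Adaptive step size */
	int16_t sample1;   /* Most recent sample */
	int16_t sample2;   /* Second most recent sample */
} ADPCMState;

int16_t DecodeNibble(ADPCMState *s, uint8_t nibble)
{
	/* Sign-extend the 4-bit nibble */
	int32_t signedNibble = (nibble & 0x08) ? (nibble - 16) : (int32_t) nibble;

	/* Predict from the previous two samples, then add the error term */
	int32_t predicted =
		(s->sample1 * coeff1[s->predictor] +
		 s->sample2 * coeff2[s->predictor]) / 256 +
		signedNibble * s->delta;

	/* Clamp to signed 16-bit */
	if (predicted > 32767) predicted = 32767;
	if (predicted < -32768) predicted = -32768;

	s->sample2 = s->sample1;
	s->sample1 = (int16_t) predicted;

	/* Adapt the step size, with a floor of 16 */
	s->delta = (adaptTable[nibble] * s->delta) / 256;
	if (s->delta < 16) s->delta = 16;

	return s->sample1;
}
```

The actual decoder also has to deal with block headers (which carry the initial predictor, delta, and first two samples per channel) and interleaving, which is where most of the ugliness lives.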
I just integrated SDL_AudioStream into the mixer and it appears to work perfectly! It's not been thoroughly tested but seeing that it's already handling some of XACT's ridiculous sample rates I'm already really happy with it:
The two things that might go wrong:
If those two things work then we're basically done with pulling data from the WaveBank, and the rest will be volume/pitch/effects.
I did a little bit of work trying to get pitch shifting to work, but after a few failed experiments it seems it would be a lot easier if we could just use SDL_AudioStream to do pitch shifting as well. I've posted on SDL's Discourse page to get feedback on a possible API for this:
https://discourse.libsdl.org/t/audio-enhancement-sdl-setstreamsamplerate/23304
If we can get that then it saves us a LOT of trouble and gives us a bit more room to keep all the more serious optimizations (SSE, Neon, etc.) in SDL rather than FACT.
Once pitch shifting is done we'll have enough of a foundation to start wiring Cues to Waves, provided that INTERNAL_UpdateCue is ready for it.
I ended up re-doing the resampler entirely; it's now our own work being done in FACT. The good news is it works exactly how we need it to, the bad news is that right now it's a linear resampler (i.e. very lazy and pretty noisy):
A lot of this was done with the help of Chris Robinson of the OpenAL Soft project, so shoutout to Chris for all the solid advice!
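For anyone wondering what "linear resampler" means concretely, here's a minimal fixed-point sketch: step through the input at a rate of inRate/outRate in 32.32 fixed point, interpolating between adjacent samples. The macro and function names are mine, not FACT's internals:

```c
#include <stddef.h>
#include <stdint.h>

#define FIXED_PRECISION 32
#define FIXED_ONE (1ULL << FIXED_PRECISION)
#define FIXED_FRACTION_MASK (FIXED_ONE - 1)

/* Linear resampler sketch. `input` needs one extra guard sample at the
 * end, since the interpolator reads input[idx + 1] on the last step.
 * Returns the number of whole input samples consumed.
 */
size_t ResampleLinear(
	const int16_t *input,
	int16_t *output,
	size_t outLen,
	uint32_t inRate,
	uint32_t outRate
) {
	uint64_t step = (uint64_t) inRate * FIXED_ONE / outRate;
	uint64_t cur = 0;
	size_t i;
	for (i = 0; i < outLen; i += 1)
	{
		size_t idx = (size_t) (cur >> FIXED_PRECISION);
		int64_t frac = (int64_t) (cur & FIXED_FRACTION_MASK);

		/* Interpolate between the two nearest input samples */
		output[i] = (int16_t) (
			input[idx] +
			(((input[idx + 1] - input[idx]) * frac) >> FIXED_PRECISION)
		);
		cur += step;
	}
	return (size_t) (cur >> FIXED_PRECISION);
}
```

The noisiness comes straight from the interpolation: a straight line between samples is a poor reconstruction filter, which is why a better resampler is on the TODO list.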
Up next is something I thought I could put off but kind of can't: Dealing with wavedata channel counts vs. the output channel count. Should be easy to shove into an arbitrary 2-channel setup, but we'll have to take this more seriously when 5.1 audio starts coming in.
FACT now deals with stereo wavedata properly:
The way we deal with this is pretty simple, despite the code being a lot longer: Channel data is deinterleaved until we mix it into the output buffer, where we then mix the resample data into the correct channel (per Apply3D, Speaker Position, junk like that). This only really affects stereo Wave entries; the Mono decode paths are the same and all the data is stored exactly the same, but we now store it in the "left" buffer, only using the "right" buffer for true stereo data (i.e. a Wave with no Apply3D data).
For performance, the MSADPCM decoders aren't all that different since we were basically reading sample-by-sample anyway (since that's how MSADPCM kinda works), but the stereo PCM decoders are now a little slower as a result, as we have to de-interleave the data in the WaveBank and/or do the stereo-to-mono conversion as we read in the data. For anyone crazy enough to use uncompressed PCM16 stereo data in WaveBanks, the reads are going to be MUCH slower than before. Consider compressing the data or using mono buffers if you don't actually depend on the stereo mix.
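Both decode-side transforms mentioned above are simple to state in code. A sketch (function names are mine, not FACT's):

```c
#include <stddef.h>
#include <stdint.h>

/* Splits interleaved stereo PCM16 (LRLRLR...) into separate
 * left/right buffers
 */
void Deinterleave(
	const int16_t *interleaved,
	int16_t *left,
	int16_t *right,
	size_t frames
) {
	size_t i;
	for (i = 0; i < frames; i += 1)
	{
		left[i] = interleaved[i * 2];
		right[i] = interleaved[i * 2 + 1];
	}
}

/* Collapses interleaved stereo to mono by averaging the channels */
void StereoToMono(
	const int16_t *interleaved,
	int16_t *mono,
	size_t frames
) {
	size_t i;
	for (i = 0; i < frames; i += 1)
	{
		mono[i] = (int16_t) (
			(interleaved[i * 2] + interleaved[i * 2 + 1]) / 2
		);
	}
}
```

Either way, every frame of stereo PCM now gets touched sample-by-sample on the way in, which is exactly why the uncompressed stereo read path got slower.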
Up next is loop points, since I'm knee-deep in these decoders anyway...
Loop points aren't done yet but in the meantime I got a lot of the busy work on 3D audio done:
https://github.com/flibitijibibo/FACT/blob/d3ac63069b1a8c35c9706d8ba2ff4a25179a0864/src/FACT.h#L625 https://github.com/flibitijibibo/FACT/blob/master/src/FACT_3DAudio.c
If anyone wants to figure out the magical numbers in X3DAUDIO_HANDLE that'd be great:
https://github.com/flibitijibibo/FACT/blob/d3ac63069b1a8c35c9706d8ba2ff4a25179a0864/src/FACT.h#L727
Loop points are done for PCM, MSADPCM isn't done because I got sick of worrying about the macro hell and decided to spend that time undoing all the macro stuff. Look at how pretty it is now!
Now that that's over with MSADPCM looping should be a little easier to do without destroying everything whenever I change something...
MSADPCM looping is done, so minus writing a better resampler, we're done pulling in wave data. The mixer's remaining tasks involve everything before and after pulling in waves: Cue updates, positional audio, and real-time effects (reverb, filters, things like that). That's probably the order I'll be doing them in, so if you want to take on the latter two tasks, those are going to be open for quite a while since Cues are pretty messy to implement.
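Gapless looping boils down to wrapping the read position at the loop end without dropping the overshoot. A sketch with illustrative field names (this is not FACT's actual state struct):

```c
#include <stdint.h>

/* Hypothetical per-wave playback position; field names are mine */
typedef struct WavePosition
{
	uint32_t position;  /* Current read position, in samples */
	uint32_t loopStart; /* First looped sample */
	uint32_t loopEnd;   /* One past the last looped sample */
	uint32_t loopCount; /* Remaining loop iterations */
} WavePosition;

/* Advance by `samples`, wrapping at the loop end while loops remain.
 * Carrying the overshoot (position - loopEnd) past the wrap is what
 * makes the loop gapless: no samples are dropped at the seam.
 */
void AdvancePosition(WavePosition *w, uint32_t samples)
{
	w->position += samples;
	while (w->loopCount > 0 && w->position >= w->loopEnd)
	{
		w->position = w->loopStart + (w->position - w->loopEnd);
		w->loopCount -= 1;
	}
}
```

For PCM this is the whole story; for MSADPCM the wrap additionally has to land on (or re-decode from) a block boundary, which is where the staging-buffer pain comes from.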
Been a while since I updated this thread... a lot of my time has been spent reading the existing XACT implementation alongside the official XACT documentation. It's slowly starting to make sense, but I won't be surprised when this seriously starts breaking when we start doing full playthroughs.
Today I pushed a bunch of stuff to fill out the C#->C tasks, taking care of pretty much every TODO/FIXME that isn't going to take serious work and should replicate what we do in FNA now. All that's left is Cue, which requires a little more thought since it has to work well with FACT_INTERNAL_UpdateCue. I expect the Cue C#->C port and the Cue Update items to be finished at the same time.
While today's pushes took out a bunch of tickboxes, I added a couple more. Aside from Cues, this is what we've got:
I included various links for each task, in the hopes that it helps someone other than me if they choose to take those issues on.
Finally figured out where XACT reverb comes from:
http://www.princetondigital.com/products/rackmount.html
Soooo I guess there's not going to be much in the way of documentation for the algorithms used for this effect.
I just finished writing FACT#:
https://github.com/flibitijibibo/FACT/tree/master/csharp
Tomorrow I'm going to create the fact branch in FNA's repository and start integrating FACT into the Audio namespace. SoundEffect and Song aren't in place yet, but I'm now at a point where I feel more comfortable using real games instead of FACTTool to try things out. I don't expect the XNA Sound API to be very large though, so I'm not too concerned about it.
Current concerns with FACT itself include Cue Wave updates, 3D audio, and fading.
FACT# is now integrated in the fact branch:
https://github.com/FNA-XNA/FNA/tree/fact
Just to show how much nicer this already is, compare the current FNA Cue to the FNA Cue with FACT:
https://github.com/FNA-XNA/FNA/blob/1f330620fb09ce3ce59e444a1731bb7903a09d20/src/Audio/Cue.cs
https://github.com/FNA-XNA/FNA/blob/a1e73b0b62226bbfdad82c3f6ee1f362efae3712/src/Audio/Cue.cs
The ALDevice is still in the fact branch, so you can keep using SoundEffect if you still need it for testing purposes.
Hey look, FACT kinda works!
I just ported over the Apply3D math from FNA into FACT:
This is pretty much all we had in FNA since OpenAL did the rest. The rest of our 3D audio stuff is going to be based on the DX SDK docs and written from scratch. There is this but I think we have to do something more involved for SetMatrixCoefficients:
https://github.com/FNA-XNA/FNA/blob/master/src/Audio/SoundEffectInstance.cs#L241
Relevant docs:
For anyone that likes doing trigonometry busy work, here's an easy TODO:
SetMatrixCoefficients is now implemented, meaning 3D mixing should now be possible once FACT3DAudioCalculate is implemented:
https://github.com/flibitijibibo/FACT/commit/defc600ec9d73d057a9d57dc006158faa00aec57
Once again here's the function if anyone really likes doing trig:
FACT's XNA Sound API is now stubbed in:
https://github.com/flibitijibibo/FACT/commit/87c33382582084372d716c5e2406848fabd30ca9 https://github.com/flibitijibibo/FACT/commit/cc43bbc7fd6897ffaef2bb42ed3924a2dd0caa58
As you might expect, it's not terribly complicated. Only two things are missing:
The latter was never right to begin with and I do not have any games using it (anymore) so that's not a big deal, but I'll have to figure out Microphone at some point. It's a whole separate code path so it's not quite as easy to throw in as everything else.
Did a little bit of work to isolate the resampler from the rest of the library - now all the work for writing a new resampler should be in FACT_internal alone, via these three spots:
You can do as much damage to these locations as you want and everything else should remain unaffected by it.
Keep up the good work @flibitijibibo! Love the detailed progress updates!
Just rewrote the FACT TODO list. It's now organized into these categories:
You'll notice that XACT3 is basically done, while XAudio2 is almost an entirely new category.
I wasn't sure if I would hit this wall, but in looking at the SoundEffect API very very closely it appears that it would in fact be significantly easier to just reimplement the XAudio2 spec and write a SoundEffect implementation on top of that. This is bad in the sense that XAudio2 is a MUCH larger API than SoundEffect, but it's good in a whole lot of ways, mainly centered around both accuracy and simplicity in how the higher-level APIs are going to be implemented.
In writing FACT from scratch it's become abundantly clear that most of XNA's audio quirks come from XAudio2's quirks; writing a sound mixer from scratch directly from the XACT and SoundEffect APIs is hard, but writing a sound mixer for XAudio2 is a little easier and writing XACT/SoundEffect reimplementations on top of XAudio2 is a lot easier. The APIs were extremely poorly written and a lot of the things you would think it could do are actually not on the table, and all those secret limitations come from very lazy implementations via XAudio2.
But hey, lazy implementation means lazy _re_implementation. So that's good for us!
I'm going to be knee-deep in the XAudio stuff but the way I've integrated the upcoming refactor should allow those still working on FACT stuff to keep working. The 3D audio stuff is here...
https://github.com/flibitijibibo/FACT/blob/master/src/F3DAudio.c
... and the resampler ugliness is still here:
So if you want to work on 3D or want to make a good resampler I encourage you to keep hacking on those files; I'm going to be wayyy over here so don't worry about it:
https://github.com/flibitijibibo/FACT/blob/master/src/FAudio.c
The surface-level XAudio2 APIs have been implemented and a new FAudio_Platform layer has been written. Up next is the mixer...
... which is kind of weird this time around as it should be easier but we have to deal with the "graph" system:
https://msdn.microsoft.com/en-us/library/windows/desktop/ee415741(v=vs.85).aspx
Simple in concept, until you get to the shitshow that is sample rate conversion:
https://msdn.microsoft.com/en-us/library/windows/desktop/ee415817(v=vs.85).aspx
We at least know the process that each voice type goes through as it's mixed...
https://msdn.microsoft.com/en-us/library/windows/desktop/ee415825(v=vs.85).aspx
... but I dunno wtf the idea is with getting submixes in the right order and storing resample data.
EDIT: I'm an idiot, there's an explicit parameter for submix priorities:
We have a little XNA hack that exposes the XACT volume distance falloff curves, by setting the pVolumeCurve in https://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.x3daudio.x3daudio_emitter(v=vs.85).aspx:
unsafe IntPtr PrepareEmitterFalloffCurve(
	AudioEmitter emitter,
	double soundZeroDistance, double soundZeroDistanceLevel,
	double soundMinDistance, double soundMinDistanceLevel,
	double soundMaxDistance, double soundMaxDistanceLevel,
	double soundInfDistance, double soundInfDistanceLevel,
	double curveFactor
) {
	if (ClrDetection.IsMono())
		return IntPtr.Zero;
	if (ClrDetection.Is64Bit())
		return IntPtr.Zero;
	var typeAudioEmitter = typeof(AudioEmitter);
	var typeAudioEmitterEmitterData = typeAudioEmitter.GetField("emitterData", BindingFlags.NonPublic | BindingFlags.Instance);
	var typeXactEmitterData = typeAudioEmitterEmitterData.FieldType;
	var typeXactEmitterDataPVolumeCurve = typeXactEmitterData.GetField("pVolumeCurve", BindingFlags.NonPublic | BindingFlags.Instance);
	var curve = new List<Pair<double, double>>();
	curve.Add(new Pair<double, double>(soundZeroDistance, soundZeroDistanceLevel));
	for (float l = 0; l <= 1.0f; l += 0.1f) {
		var squaredDistanceRange = Math.Pow(l, curveFactor);
		var distance = Extensions.Lerp(soundMinDistance, soundMaxDistance, squaredDistanceRange);
		var level = Extensions.Lerp(soundMinDistanceLevel, soundMaxDistanceLevel, l);
		curve.Add(new Pair<double, double>(distance, level));
	}
	curve.Add(new Pair<double, double>(soundInfDistance, soundInfDistanceLevel));
	var volumeCurve = Marshal.AllocHGlobal(2 * 4 + curve.Count * 2 * 4);
	var ramBlobp = volumeCurve.ToPointer();
	((void**)ramBlobp)[0] = (int*)ramBlobp + 2;
	((int*)ramBlobp)[1] = curve.Count;
	for (var i = 0; i < curve.Count; ++i) {
		((float*)ramBlobp)[2 + i * 2] = (float)curve[i].A;
		((float*)ramBlobp)[3 + i * 2] = (float)curve[i].B;
	}
	var emitterData = typeAudioEmitterEmitterData.GetValue(emitter);
	typeXactEmitterDataPVolumeCurve.SetValue(emitterData, volumeCurve);
	typeAudioEmitterEmitterData.SetValue(emitter, emitterData);
	return volumeCurve;
}
Seems like it's the struct data, we're kind of close in FACT-CS it turns out!
https://github.com/FNA-XNA/FNA/blob/fact/src/Audio/AudioEmitter.cs https://github.com/flibitijibibo/FACT/blob/a2ed6b5a17633503af61c35cd157d87ef437debf/csharp/FACT.cs#L981
Looks like we just have to name it emitterData and not emitter.
@bartwe hey there! 😃 What a small world!
Went ahead and pushed the name change, and did it for Listener as well because it's probably the same...?
https://github.com/FNA-XNA/FNA/commit/fd121cad8cbe504cc3b74dc764a9681cd6bd4c45
Got up to the point where the internal framework for mixing is done, once the decoder and resampler are dropped in I think FAudio will work...?
For reference, the old FACT mixer thread...
... vs the FAudio mixer thread:
https://github.com/flibitijibibo/FACT/blob/dfaff0103523163cbbdc95f184727eac9fd04f12/src/FAudio_platform_sdl2.c#L431 https://github.com/flibitijibibo/FACT/blob/dfaff0103523163cbbdc95f184727eac9fd04f12/src/FAudio_internal.c#L220
Yeah. Definitely nicer. I can only hope mixing/resampling is just as nice with the way buffers are laid out for us.
Quick note before I get lost in decoders again: The resampler we have to write for FACT only applies to source voices where pitch can change the step size. Per the XAudio2 spec...
A submix voice is used primarily for performance improvements and effects processing. You cannot submit data buffers directly to submix voices. It will not be audible unless you submit it to a mastering voice. You can use a submix voice to ensure that a particular set of voice data is converted to the same format and to have a particular effect chain processed on the collective result.
In addition to the actions performed by all voices, submix voices perform the following actions.
- A fixed-rate SRC runs on the voice's output, if necessary, to convert the audio to the sample rate expected by its destination voices.
- An optional state-variable filter can be used to color the sound in various ways. See IXAudio2Voice::SetFilterParameters.
- An optional filter can be applied to the voice's outputs. See IXAudio2Voice::SetOutputFilterParameters.
So for submixes we can actually use SDL_AudioStream again! Master also uses a fixed-rate SRC but for the SDL backend that's already dealt with in the library, so we don't have to worry about it.
Hoping to have sound coming out of FAudio by the end of today. Probably won't be good sound but maybe it'll be nice for matching sample rates.
FAudio produces audio output now! The test program is pretty small too, even with the ridiculous struct inits in the middle:
#include <SDL.h>
#include "src/FAudio.h"

SDL_sem *sem;

void TEST_StreamEnd(FAudioVoiceCallback *callback)
{
	SDL_SemPost(sem);
}

int main(int argc, char **argv)
{
	FAudio *audio;
	FAudioBuffer buffer;
	FAudioVoiceSends sends;
	FAudioSendDescriptor send;
	FAudioWaveFormatEx format;
	FAudioSourceVoice *source;
	FAudioVoiceCallback callback;
	FAudioMasteringVoice *master;
	uint8_t *data;
	size_t len;

	/* Needed by the callback */
	sem = SDL_CreateSemaphore(0);

	/* Create the engine and device */
	FAudioCreate(
		&audio,
		0,
		FAUDIO_DEFAULT_PROCESSOR
	);
	FAudio_CreateMasteringVoice(
		audio,
		&master,
		FAUDIO_DEFAULT_CHANNELS,
		44100 /* Should be FAUDIO_DEFAULT_SAMPLERATE but SDL... */,
		0,
		0,
		NULL
	);

	/* Read in the buffer data (raw PCM16 export from Audacity) */
	data = (uint8_t*) SDL_LoadFile("test.raw", &len);

	/* Extremely verbose parameter init */
	send.Flags = 0;
	send.pOutputVoice = master;
	sends.SendCount = 1;
	sends.pSends = &send;
	format.wFormatTag = 1;
	format.nChannels = 2;
	format.nSamplesPerSec = 44100;
	format.nBlockAlign = 4; /* nChannels * (wBitsPerSample / 8) */
	format.nAvgBytesPerSec = 44100 * 4; /* nSamplesPerSec * nBlockAlign */
	format.wBitsPerSample = 16;
	format.cbSize = 0;
	buffer.Flags = FAUDIO_END_OF_STREAM;
	buffer.AudioBytes = len;
	buffer.pAudioData = data;
	buffer.PlayBegin = 0;
	buffer.PlayLength = len / sizeof(int16_t) / format.nChannels;
	buffer.LoopBegin = 0;
	buffer.LoopLength = 0;
	buffer.LoopCount = 0;
	buffer.pContext = NULL;
	callback.OnBufferEnd = NULL;
	callback.OnBufferStart = NULL;
	callback.OnLoopEnd = NULL;
	callback.OnStreamEnd = TEST_StreamEnd;
	callback.OnVoiceError = NULL;
	callback.OnVoiceProcessingPassEnd = NULL;
	callback.OnVoiceProcessingPassStart = NULL;

	/* Create the source, send it the PCM buffer */
	FAudio_CreateSourceVoice(
		audio,
		&source,
		&format,
		0,
		FAUDIO_DEFAULT_FREQ_RATIO,
		&callback,
		&sends,
		NULL
	);
	FAudioSourceVoice_SubmitSourceBuffer(
		source,
		&buffer,
		NULL
	);
	FAudioSourceVoice_Start(source, 0, 0);

	/* Wait until the source is done */
	SDL_SemWait(sem);

	/* Clean up. We out. */
	SDL_free(data);
	SDL_DestroySemaphore(sem);
	FAudio_StopEngine(audio);
	FAudioVoice_DestroyVoice(source);
	FAudioVoice_DestroyVoice(master);
	FAudioDestroy(audio);
	return 0;
}
Still need to bring over the MSADPCM decoder and work on source resampling, but this is turning out a lot nicer than I expected!
I just migrated the resampler over, and for some reason all the audible bugs are gone...?!
All the other files changed are mostly me just trying to not break the FACT mixer, but I could swear that this is basically a direct port of my old resampler and it sounds totally fine now. I'll... take it... I guess...
Once MSADPCM is migrated that should be enough to port FACT to FAudio. 3D is still kind of a mess but that's just me sucking at interleaved sample data, and 3D doesn't even matter until X3DAudio is done anyway.
MSADPCM decoders are in:
https://github.com/flibitijibibo/FACT/commit/9b1b78599bd984afdeca84ad3c6e85a1c4c4410a
Still need to do voice/channel/matrix volumes but that's coming later. First I'll start porting FACT to FAudio, since that's more pressing for cleaning up this disaster I've left in FACT while getting FAudio up and running. If someone wants to do volume/3D stuff in FAudio/F3DAudio you probably have a solid week before I start getting in your way.
Technically, .wav files that use MSADPCM can define additional AdaptCoeff_1/AdaptCoeff_2 pairs, along with an index into the AdaptCoeff_1[] and AdaptCoeff_2[] tables. (You're not supposed to overwrite the first 7 entries, which hold the defaults defined in the decoder, and it's unclear what happens if you do it anyway.) This allows multiple DATA blocks within a .wav file, each using a different adaptation coefficient pair (I assume they all get played in sequence), as well as custom adaptation coefficient pairs for each block.
Does XACT/XAudio allow this as well? or does it force you to use one of the 7 built-in 'default' adaptation coefficient pairs?
XAudio2's API suggests that you can use custom pairs, but the documentation and content pipelines all say that it's not actually supported and you have to use the MSADPCM pairs or else it fails.
Sounds like it was only ever half-implemented, then. Might make for some interesting testing on the XACT libs to see if it even 'partially' works. (Edit: Even if nothing uses it, it probably couldn't hurt to properly implement it, anyway.)
The old FACT mixer is now gone, and FACTAudioEngine/FACTWave use FAudio instead. It mostly works, but surprise surprise, my resampler has some issues with some of the obnoxious sample rates from the WaveBanks. It seems like the fixed point math isn't doing its job and we're decoding too little/resampling too much; it sounds fine up until we go too far and the library crashes.
The resampling bits, as a reminder: https://github.com/flibitijibibo/FACT/blob/fba93b0a6bd898c280855afeed5b2063a3a9f126/src/FAudio_internal.c#L46 https://github.com/flibitijibibo/FACT/blob/fba93b0a6bd898c280855afeed5b2063a3a9f126/src/FAudio_internal.c#L77 https://github.com/flibitijibibo/FACT/blob/fba93b0a6bd898c280855afeed5b2063a3a9f126/src/FAudio_internal.c#L257
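The bug class here is easy to state: the number of input samples a linear resampler touches has to be derived from the fixed-point end position with care, or the decoder comes up short. A sketch of the bookkeeping (names and layout are mine, not FACT's):

```c
#include <stdint.h>

#define FIXED_PRECISION 32
#define FIXED_ONE (1ULL << FIXED_PRECISION)

/* Given the current fractional position `cur` and fixed-point `step`,
 * compute how many input samples must be decoded to produce `outLen`
 * output samples. The final output lands at cur + step * (outLen - 1),
 * and the interpolator reads both input[idx] and input[idx + 1] there,
 * so the integer part plus 2 is the safe count. Truncating instead of
 * accounting for that last read is exactly how you end up decoding
 * too little and running off the end of the buffer.
 */
uint64_t SamplesNeeded(uint64_t cur, uint64_t step, uint64_t outLen)
{
	uint64_t last = cur + step * (outLen - 1);
	return (last >> FIXED_PRECISION) + 2;
}
```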
One thing I did NOT touch yet is Cues; that's a whole other pile of stuff that will potentially involve a submix and isn't really affected by the FAudio migration (yet). I also didn't mess with Wave state yet so there's probably some inaccuracies there too, but it's nothing terribly serious since FAudio handles that anyhow (until Cues get involved again, at least).
3D/Volumes are still open and, like I said, should be available to grab until the end of the week, when I'll hopefully be done with the resampler fixes and any new FAudio bits for FACT.
Turns out there's a glaring issue with the assumptions I make with output mixing, so I lied about 3D/Volumes being open, have to fix it now... feel free to keep poking at the resampler though.
Multichannel audio is fixed, please enjoy the wall of hardcoded default output matrix arrays:
https://github.com/flibitijibibo/FACT/commit/e512b3ae19a40c70195dfa4656f3ae23f4b8758a
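The mixing itself is just a matrix multiply per frame: every source channel is scaled into every destination channel by a coefficient. Here's a sketch; the function name, the deinterleaved input layout, and the coefficient ordering (`matrix[src * dstChans + dst]`) are choices for this illustration, not necessarily XAudio2's exact layout:

```c
#include <stdint.h>

/* Mixes a deinterleaved source into an interleaved destination
 * through an output matrix. With a mono source and the default
 * stereo matrix {1.0f, 1.0f}, this just copies to both speakers.
 */
void ApplyMatrix(
	const float *input,  /* Deinterleaved: srcChans blocks of frames */
	float *output,       /* Interleaved destination, zeroed by caller */
	const float *matrix, /* srcChans * dstChans coefficients */
	uint32_t srcChans,
	uint32_t dstChans,
	uint32_t frames
) {
	uint32_t i, src, dst;
	for (i = 0; i < frames; i += 1)
	for (dst = 0; dst < dstChans; dst += 1)
	for (src = 0; src < srcChans; src += 1)
	{
		output[i * dstChans + dst] +=
			input[src * frames + i] * matrix[src * dstChans + dst];
	}
}
```

The "wall of hardcoded arrays" exists because a sensible default matrix has to be picked for every source/destination channel-count pair.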
Did a little bit of house cleaning and moved all the XACT file parsing to FACT_internal...
https://github.com/flibitijibibo/FACT/blob/master/src/FACT_internal.c
... then moved ALL the surface-level implementation to a single FACT.c:
https://github.com/flibitijibibo/FACT/blob/master/src/FACT.c
This is a lot nicer than you'd think; without the parsing FACT.c comes out to just over 1800 lines for AudioEngine, SoundBank, WaveBank, Wave, and Cue all in a single file.
Additionally, the project is now properly called FAudio, since there is now a whole lot more going on than just the XACT stuff. (Though XACT is still about half the LOC at the moment...)
Lastly, I've relicensed everything I own in FAudio under the zlib license. It's the license I like and should make it more appealing to anyone that just wants to use FAudio without the XACT stuff. The FACT portion of FAudio is still under Ms-PL, but after posting this I intend to write an e-mail to all of the contributors for both FNA and MonoGame to see if I can't get this all under zlib. Turns out there aren't that many contributors so this is more realistic than originally expected.
It's not getting all of FNA off Ms-PL, but it's a start!
Okay, so I finally think I'm done with both resampling and gapless looping. There's a TON of padding and staging buffers now (especially for MSADPCM) but it was the only way I could figure out how to keep it from going too far past the intended offset or the intended decode buffer size. Tomorrow I'll be back on XACT implementation stuff.
I'm thinking about putting up a bounty for X3DAudio as I'm starting to realize how bad I was at trig and I don't really think I can learn what I need in time for this to ship within a century. If anyone likes 3D math and is looking for a couple weekends of work, let me know.
We're now 99% of the way there for FAudio's license - all the FNA/MG sources used within FACT are now under zlib as well! All that's left is unxwb's license, and I've just contacted Luigi about this now. So close...!
FAudio is now 100% under the zlib license! Thanks to everyone who allowed us to relicense it. That's a load off my mind... now I can focus on code again :P
The FNA SoundEffect implementation has been redone to use FAudio, it's entirely untested and probably doesn't work but should be enough of a start for anyone that uses SoundEffect and not XACT.
SoundEffect, SoundEffectInstance, and DynamicSoundEffectInstance should work now. SubmitFloatBufferEXT does not work because I haven't decided whether to move FAudio decoding straight to a float cache rather than s16 cache or just convert the float buffers to s16 upon submission. Both are kind of nasty...
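If the convert-on-submission route wins, the conversion itself is trivial; the nastiness is only in where it lives. A sketch with clamping (the helper name is mine):

```c
#include <stdint.h>

/* Converts one float sample in [-1.0, 1.0] to signed 16-bit,
 * clamping out-of-range input. Scaling by 32767 keeps the mapping
 * symmetric, at the cost of never producing -32768.
 */
int16_t FloatToS16(float sample)
{
	if (sample > 1.0f) sample = 1.0f;
	if (sample < -1.0f) sample = -1.0f;
	return (int16_t) (sample * 32767.0f);
}
```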
Currently the audio implementation is almost entirely written in C#, with system-level audio being pushed to OpenAL via the IALDevice API.
In addition to being slower than native code for the usual reasons, the abstraction layer and OpenAL dependency have become too much for FNA to accurately and reliably handle every possible case presented to us by the library of XNA games.
What we need to do is take the native XACT API and create a native library that reimplements the XACT runtime. In addition to porting our existing XACT runtime to native code, we'll be removing the IALDevice entirely and replacing it with our own sound mixer that pushes the final output stream to low-level audio APIs (for our current platforms this will be done via SDL audio).
Major benefits include: the XACT runtime running primarily on the audio thread rather than the game thread; removing a dependency (a dependency with a non-permissive license!); XACT logic pushing directly to the sound mixer (rather than through the SoundEffect API); the ability to actually stream wavedata (dramatically reducing RAM use); an XACT runtime available to non-XNA games on Linux and macOS; and having a significant chunk of code running natively rather than through the CLR.
The major TODO list looks like this:
XAudio2:
XACT3:
XNA:
Bonus Points:
Even though XACT was definitely written in C++ (as can be seen by how they implement the C API), we're going to try and keep it in pure C, to keep the dependency tree minimal. My expectation is that FACT's desktop version will only depend on SDL and SDL alone, so on Windows we won't depend on any MSVCRT libraries.