teadrinker opened 6 months ago
A simple solution would just be to pull all samples from the Project Sound Track, and keep them globally. Downsides:
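The pull-everything approach could look roughly like the following decode loop. This is a sketch only, assuming ManagedBass-style bindings (`Bass.CreateStream` with `BassFlags.Decode`, `Bass.ChannelGetData`); the class and method names are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using ManagedBass;

public static class SoundTrackSampleCache
{
    // Global buffer holding every decoded sample of the Project Sound Track.
    public static float[] Samples = Array.Empty<float>();

    // Decode the whole file once, up front, into memory (hypothetical helper).
    public static void LoadAll(string path)
    {
        // A decode stream is never played; data is pulled with ChannelGetData.
        int stream = Bass.CreateStream(path, 0, 0, BassFlags.Decode | BassFlags.Float);
        var all = new List<float>();
        var chunk = new float[4096];
        while (true)
        {
            // The length argument is in bytes; float samples are 4 bytes each.
            int bytes = Bass.ChannelGetData(stream, chunk, chunk.Length * sizeof(float));
            if (bytes <= 0)
                break;
            for (int i = 0; i < bytes / sizeof(float); i++)
                all.Add(chunk[i]);
        }
        Bass.StreamFree(stream);
        Samples = all.ToArray();
    }
}
```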
Interesting. I was using a similar approach by serializing the result of the FFT as JSON. I'm honestly not sure if processing the waveform directly on the fly would be fast enough in C#, but I'm not that deep into audio. Maybe @HolgerFoerterer has an idea how to do this.
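Serializing an FFT frame as JSON is a one-liner with `System.Text.Json`; a minimal sketch of the caching idea (the class and method names here are made up, not the actual implementation):

```csharp
using System;
using System.Text.Json;

public static class FftJsonCache
{
    // Hypothetical helper: serialize one frame of FFT magnitudes to JSON
    // so it can be cached and replayed deterministically later.
    public static string CacheFftFrame(float[] fftMagnitudes)
        => JsonSerializer.Serialize(fftMagnitudes);

    // Deserialize a cached frame back into a float array.
    public static float[] LoadFftFrame(string json)
        => JsonSerializer.Deserialize<float[]>(json) ?? Array.Empty<float>();
}
```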
To get sample-precise output for video rendering, I spent a lot of time trying to convince Bass to switch from a real-time-based approach to another mode where I could get access to buffered data in a consistent way. To be honest, I failed too. Whatever I did... whenever I repositioned the playback in any way... things screwed up. So at the moment, I position the playback at the very beginning of the recording and avoid repositioning during the render.
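The rewind-once-then-never-reposition workaround is a single setup call; a sketch, assuming ManagedBass-style bindings (`Bass.ChannelSetPosition` takes a byte offset):

```csharp
using ManagedBass;

public static class RenderStart
{
    // Rewind once to the very beginning before rendering starts, then never
    // reposition during the render (sketch; the helper name is illustrative).
    public static void RewindForRender(int channelHandle)
    {
        Bass.ChannelSetPosition(channelHandle, 0); // byte position 0 = start of recording
        Bass.ChannelPlay(channelHandle);
    }
}
```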
So yes, you should theoretically be able to obtain buffered data by using a comparable approach. At least for FFT data there is a flag to fill the FFT without consuming new data.
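In the C API the flag in question is `BASS_DATA_FFT_NOREMOVE`, which computes the FFT from buffered data without consuming it. A sketch with ManagedBass-style bindings (the exact `DataFlags` member name is an assumption on my part):

```csharp
using ManagedBass;

public static class FftPeek
{
    // Read a 1024-bin FFT from the channel without consuming buffered samples.
    // DataFlags.FFTNoRemove is assumed to map to BASS_DATA_FFT_NOREMOVE;
    // check the bindings you use for the actual enum name.
    public static float[] PeekFft(int channelHandle)
    {
        var fft = new float[1024]; // FFT2048 yields 1024 magnitude bins
        Bass.ChannelGetData(channelHandle, fft,
            (int)(DataFlags.FFT2048 | DataFlags.FFTNoRemove));
        return fft;
    }
}
```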
But in a real-time scenario, don't expect buffered data to be exactly the same every time. Bass will apparently play back faster/slower and even skip as it sees fit to keep sync, and I don't know how to align that data then. When we get new data, it's obviously current, but that seems to be all we know.
And to answer the question by @pixtur: C# should be able to handle manual processing of stereo samples at 44.1-48 kHz easily.
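For scale: 48 kHz stereo is under 100,000 float samples per second, which a plain C# loop handles with plenty of headroom. A minimal sketch of per-sample processing (RMS of an interleaved stereo buffer; the helper is illustrative, not from the project):

```csharp
using System;

public static class WaveformMath
{
    // Root-mean-square level of an interleaved stereo float buffer.
    // One second at 48 kHz stereo is 96,000 floats; this loop is nowhere
    // near a bottleneck at that rate on modern hardware.
    public static float Rms(float[] interleavedStereo)
    {
        if (interleavedStereo.Length == 0)
            return 0f;
        double sum = 0;
        foreach (var s in interleavedStereo)
            sum += s * s;
        return (float)Math.Sqrt(sum / interleavedStereo.Length);
    }
}
```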
> whenever I repositioned the playback in any way... things screwed up
I suspect it might not be possible with the current API, due to the nature of sound running in another thread. You'd need a function that gives you the data AND the position in the same API call; otherwise there is no guarantee they would be in sync.
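The race being described: position and data come from two separate calls, and the playback thread can advance between them. A sketch of the problem and of the (hypothetical, not part of Bass) atomic call that would fix it, assuming ManagedBass-style bindings:

```csharp
using ManagedBass;

public static class BufferedRead
{
    // What the current API forces (racy): position at time T0, data at time
    // T1 > T0, with playback advancing in between. There is no guarantee
    // that pos corresponds to data[0].
    public static (long pos, float[] data) RacyRead(int channel, int sampleCount)
    {
        long pos = Bass.ChannelGetPosition(channel);                      // snapshot at T0
        var data = new float[sampleCount];
        Bass.ChannelGetData(channel, data, sampleCount * sizeof(float));  // snapshot at T1
        return (pos, data);
    }

    // What would be needed (made up, does NOT exist in Bass): one call that
    // returns the data AND the position it belongs to, atomically.
    // public static (long pos, float[] data) GetDataWithPosition(int channel, int count);
}
```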
I thought it would be nice to have access to the waveform, both for processing in order to generate sync events, and for displaying/feeding waveform data into 3D points and other fun stuff. However, I failed (this was around Sept/Oct 2023).
I suspect the Bass library might not be able to do this properly. You can pull waveform data, but you cannot align it relative to the data you pulled previously.
But it might also just be that I totally messed up somewhere...
I will dump the code here in case it might be useful:
`Core/Audio/AudioAnalysis.cs`
In `Core/Audio/AudioEngine.cs`, I created `UpdateWaveBuffer`, which is called right after `UpdateFftBuffer` and uses the same args:
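The dumped code itself isn't reproduced here, but the shape of such an `UpdateWaveBuffer` next to `UpdateFftBuffer` might look roughly like this. This is a sketch under assumptions (ManagedBass-style bindings, a float-format stream, illustrative buffer size), not the actual dump:

```csharp
using ManagedBass;

public static class AudioAnalysisSketch
{
    // Fixed-size destination for the most recent raw samples (size is illustrative).
    public static readonly float[] WaveBuffer = new float[1024];

    // Sketch: pull the latest raw waveform data from the channel, analogous
    // to how UpdateFftBuffer pulls FFT data. Length is given in bytes
    // (4 bytes per float sample), OR'd with the float-data flag as in the C API.
    public static void UpdateWaveBuffer(int soundStreamHandle)
    {
        Bass.ChannelGetData(soundStreamHandle, WaveBuffer,
            (WaveBuffer.Length * sizeof(float)) | (int)DataFlags.Float);
    }
}
```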