You've also mentioned that using `USoundWave` as a base means you lose out on FFT analysis, but if you're already building a procedural sound at runtime, why not prebuild the FFT data on an opt-in basis? `UAudioComponent` has functionality to get FFT data, which you can run on the tick. There could also be the option to not prebuild the FFT data, and instead just cache it as it's generated.
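Roughly what I mean, as a sketch: this assumes the wave being played has baked FFT analysis enabled on the asset, and the free function and frequency list are just for illustration, not part of any plugin.

```cpp
#include "CoreMinimal.h"
#include "Components/AudioComponent.h"
#include "Sound/SoundWave.h"

// Poll the engine's cooked FFT data for a playing component (call this from a tick).
static void LogSpectrum(UAudioComponent* AudioComponent)
{
    if (!AudioComponent || !AudioComponent->IsPlaying())
    {
        return;
    }

    // Frequencies (Hz) to query; pick whichever bands you care about.
    const TArray<float> Frequencies = { 60.f, 250.f, 1000.f, 4000.f };

    TArray<FSoundWaveSpectralData> SpectralData;
    if (AudioComponent->GetCookedFFTData(Frequencies, SpectralData))
    {
        for (const FSoundWaveSpectralData& Band : SpectralData)
        {
            UE_LOG(LogTemp, Verbose, TEXT("%.0f Hz -> %.3f"),
                Band.FrequencyHz, Band.NormalizedMagnitude);
        }
    }
}
```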
I think the best way to offer FFT would actually be to have a custom `UAudioComponent`, and offer things like `IsBeat` in there (as an event, perhaps?), whilst storing "cooked" FFT data in a largely default custom `USoundWave`, just with the various GetFFT methods overridden.
Does this not make it much more compatible with the engine, and therefore easier for users to just drop in place and utilise?
This also means we never have to actually work out why those procedural soundwaves never stop/destroy.
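Something along these lines is the shape I'm imagining. To be clear, the class name, the `OnBeat` delegate, the band and the threshold are all placeholders, and the "beat" test here is a naive magnitude threshold on the cooked FFT data, not a real onset detector.

```cpp
#pragma once

#include "CoreMinimal.h"
#include "Components/AudioComponent.h"
#include "BeatAudioComponent.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FOnBeat);

UCLASS(ClassGroup = (Audio), meta = (BlueprintSpawnableComponent))
class UBeatAudioComponent : public UAudioComponent
{
    GENERATED_BODY()

public:
    UBeatAudioComponent(const FObjectInitializer& ObjectInitializer)
        : Super(ObjectInitializer)
    {
        PrimaryComponentTick.bCanEverTick = true;
    }

    // Fired when the low band crosses the threshold on a rising edge.
    UPROPERTY(BlueprintAssignable, Category = "Audio|Analysis")
    FOnBeat OnBeat;

    UPROPERTY(EditAnywhere, Category = "Audio|Analysis")
    float BeatThreshold = 0.7f;

    virtual void TickComponent(float DeltaTime, ELevelTick TickType,
                               FActorComponentTickFunction* ThisTickFunction) override
    {
        Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

        // Requires the playing USoundWave to have baked FFT analysis enabled.
        const TArray<float> LowBand = { 60.f };
        TArray<FSoundWaveSpectralData> Spectrum;

        const bool bLoud = GetCookedFFTData(LowBand, Spectrum)
            && Spectrum.Num() > 0
            && Spectrum[0].NormalizedMagnitude > BeatThreshold;

        if (bLoud && !bWasLoudLastTick)
        {
            OnBeat.Broadcast();
        }
        bWasLoudLastTick = bLoud;
    }

private:
    bool bWasLoudLastTick = false;
};
```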
This makes sense; AudioSynesthesia has good functionality for pre-analysis of audio data and can be used in the AudioAnalysisTools plugin. Thanks for sharing!
I will do this in the future :)
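For reference, consuming a pre-analysed Synesthesia asset at runtime looks roughly like this. It assumes a LoudnessNRT asset has been created in the editor with its Sound set to the wave being played (the analysis is baked offline); the wrapper function is illustrative, and the exact query function names should be checked against the plugin headers.

```cpp
#include "LoudnessNRT.h" // from the AudioSynesthesia plugin

// Query baked loudness at the current playback position of the analysed sound.
static float SampleLoudness(ULoudnessNRT* LoudnessAsset, float PlaybackSeconds)
{
    float Loudness = 0.f;
    if (LoudnessAsset)
    {
        LoudnessAsset->GetLoudnessAtTime(PlaybackSeconds, Loudness);
    }
    return Loudness;
}
```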
I actually started on a Synesthesia runtime plugin the other day, but I've realised I don't need audio importing or analysis in my project right now.
So I've dumped the EXTREMELY basic PoC to a branch; maybe you can utilise it.
There's a native plugin for KISS FFT, and a number of built-in analysers, which might be of interest.
Are these of any use? Personally, I'm interested in the `FFTPeakPitchDetector`, which can take a float buffer and produce timestamps based on pitch ranges; I'm going to have to take a close look at that.