averas opened this issue 4 years ago
It would require some rework, as the project heavily relies on WebAudio API built-in functionalities. Basically, you'd have to replace getByteFrequencyData() here to read from your stream, and then adjust references to the AudioContext and Analyser parameters, like fftSize and sampleRate (here and here) and frequencyBinCount here.
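Very roughly, a stand-in object you'd swap in place of the real AnalyserNode in your fork could look something like the sketch below. The class and the update() method are made up for illustration; only getByteFrequencyData(), fftSize, sampleRate and frequencyBinCount mirror the Web Audio names the visualizer reads:

```js
// Sketch only - an "analyser-like" object fed by an external FFT stream
class ExternalAnalyser {
  constructor( { fftSize = 8192, sampleRate = 44100 } = {} ) {
    this.fftSize = fftSize;        // mirrors AnalyserNode.fftSize
    this.sampleRate = sampleRate;  // needed to map bins to frequencies
    this._latest = new Uint8Array( this.frequencyBinCount );
  }
  get frequencyBinCount() {
    return this.fftSize / 2;       // same relationship the real AnalyserNode uses
  }
  update( frame ) {                // call this whenever a new frame arrives from your stream
    this._latest.set( frame );
  }
  getByteFrequencyData( out ) {    // same signature the library already calls
    out.set( this._latest );
  }
}
```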
I am also interested in using this library with an external FFT stream, but I see your previous comments point to an older version. Is this still a possibility, and could you advise on where to focus? I'm currently trying to hack my stream in by setting AudioMotionAnalyzer._fftData, but having little luck getting anything drawn on the canvas.
@averas @popfendi The current version of audioMotion-analyzer gets FFT data from getFloatFrequencyData(): the values, representing decibels, are stored in a Float32Array. I suppose different FFT implementations may use other data formats.
I can add an option for a user callback where you could provide your own data, but I need to better understand what people are using so I can try and build a flexible solution.
Can you provide more info about the FFT you're using, and/or share some code that generates your data stream?
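Just to illustrate what I have in mind (nothing like this exists in the library yet, and the option name below is only a placeholder), the callback could be asked to fill a Float32Array of decibel values once per animation frame:

```js
// Hypothetical usage - 'getFFTData' is an invented option name, not a real API
const audioMotion = new AudioMotionAnalyzer( document.getElementById('container'), {
  getFFTData: ( out ) => {
    out.set( myLatestFrame ); // fill 'out' with your own dB values (myLatestFrame = your data source)
  }
});
```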
Hi @hvianna, in my case I have an IoT device producing an FFT stream as a float64 array. The aim was to consume the stream on the client and use audioMotion-analyzer to visualise it.
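Roughly, the idea on the client would be something like the sketch below (the WebSocket transport and URL are just placeholders; the relevant part is converting the float64 frames to the Float32Array format the analyzer works with):

```js
// Sketch: device pushes raw float64 FFT frames; URL is just an example
const ws = new WebSocket( 'wss://my-iot-device.local/fft' );
ws.binaryType = 'arraybuffer';

let latestFrame = new Float32Array( 0 );

ws.onmessage = ( event ) => {
  const frame64 = new Float64Array( event.data ); // one FFT frame from the device
  latestFrame = Float32Array.from( frame64 );     // downcast to Float32Array for the visualizer
};
```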
Would it be practically possible to use the analyzer/visualization with an already existing FFT stream, which by the way is not necessarily audio but other spectrum data? It seems like the AudioContext and the built-in FFT are an integral part of the library.