UXVirtual opened 8 years ago
I started integrating fftw into my fork if you'd like to test it. For the moment it will only compile on OSX and Linux, with fftw3 and portaudio installed as shared libraries, since the fftw3 configuration is missing from the binding.gyp for Windows... but if I find some time I will try to add Windows support as well. Once the integration is more stable I can open a pull request...
Any updates? Would be interested in using this...
I'm also looking to do something similar. I could always use an external library to perform the FFT on the buffer. But is this library a good way to listen to audio from the machine without breaking audio on the device? I would probably intercept the buffer, save a copy, return it immediately, and then analyze the saved audio for visualization. Is there a way to get the buffers without also being responsible for returning them to the audio device?
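The intercept-copy-return idea above can be sketched as a pass-through callback. This is a hypothetical sketch: the `(inputBuffer) -> outputBuffer` callback shape is assumed from the thread's description of `addAudioCallback()`, not verified against the library.

```javascript
// Hypothetical sketch: a pass-through audio callback that copies each buffer
// for later analysis and returns the original unchanged, so playback on the
// device is not disturbed.

function makeTapCallback(analysisQueue) {
  // The returned function matches the assumed (inputBuffer) -> outputBuffer
  // shape: intercept, copy, return immediately.
  return function onAudio(inputBuffer) {
    analysisQueue.push(inputBuffer.slice()); // cheap copy for offline analysis
    return inputBuffer;                      // pass the audio straight through
  };
}

// Hypothetical usage with a node-core-audio-style engine:
// const engine = coreAudio.createNewAudioEngine();
// const queue = [];
// engine.addAudioCallback(makeTapCallback(queue));
// ...then run your FFT over queue.shift() on your own schedule,
// outside the real-time callback.
```

The key point is that the copy keeps the expensive analysis off the real-time audio path; the callback itself only does an O(n) copy before returning.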
I'm working on a script to analyse the audio being played to the user via speakers and control smart bulbs. Just wondering: is there a way to access the fftBuffer in the AudioEngine, or is FFT not fully implemented yet? I'd like to analyse different frequency bands of the audio data in order to run different light sequences in response.
Alternately, is there a way I can apply the fft class directly to the buffer returned in the callback function I'm passing to addAudioCallback()?
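If the built-in fft class turns out to be unusable, the frequency-band analysis described above can be done on the callback's buffer with a plain DFT, which is fine for small frame sizes. A minimal sketch, assuming mono float samples; `magnitudeSpectrum` and `dominantBin` are illustrative names, not part of the library:

```javascript
// Naive DFT magnitude spectrum over one audio frame. O(N^2), so only
// suitable for small frames; swap in a real FFT library for larger ones.
function magnitudeSpectrum(samples) {
  const N = samples.length;
  const mags = new Array(Math.floor(N / 2)); // bins up to Nyquist
  for (let k = 0; k < mags.length; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const phase = (-2 * Math.PI * k * n) / N;
      re += samples[n] * Math.cos(phase);
      im += samples[n] * Math.sin(phase);
    }
    mags[k] = Math.sqrt(re * re + im * im);
  }
  return mags;
}

// Index of the strongest bin; frequency ≈ bin * sampleRate / N.
function dominantBin(mags) {
  let best = 0;
  for (let k = 1; k < mags.length; k++) {
    if (mags[k] > mags[best]) best = k;
  }
  return best;
}
```

For example, with a 64-sample frame at 44.1 kHz each bin covers about 689 Hz, so you could sum the magnitudes of the low, mid, and high bin ranges and map each band to a different light sequence.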