Open rampartisan opened 8 years ago
If you think about the pipeline, you have input -> DSP -> output. The delay from input to output is exactly equal to the latency value, which determines your buffer size. The read/write callback is exactly where you should put the DSP; just make sure that it takes less time to run than the latency.
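To make that concrete, here is a minimal sketch of the idea: the DSP stage is just a plain function over a block of float samples, and in a real libsoundio program it would be called from inside the read (or write) callback, after copying samples out of the channel areas from soundio_instream_begin_read. The gain stage and the function name here are illustrative assumptions, not part of libsoundio:

```c
#include <stddef.h>

/* Illustrative DSP stage: apply a fixed gain in place.
 * In a real program this runs inside the read/write callback,
 * after copying samples out of the SoundIoChannelArea pointers,
 * and it must finish well within the stream's latency budget. */
static void process_block(float *samples, size_t n, float gain)
{
    for (size_t i = 0; i < n; i++)
        samples[i] *= gain;
}
```

The key constraint Andrew mentions is timing: whatever `process_block` does, its worst-case run time over one callback's worth of frames has to stay under the stream latency, or you get over/underruns.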
Hey Andrew,
Thanks for your answer; I didn't want to just start putting code wherever I thought best! Now that I know this, I have some further questions. Apologies if these are too application-specific (and unclear)!
My input is a 16 kHz, 4-channel microphone array. My beamforming algorithm combines these 4 channels into mono at 16 kHz, but part of the processing requires an FFT/IFFT of the signal, so I would like to be able to vary the number of samples I process at one time.
In my code, changing the latency value between device->latency_min and device->latency_max gives between 5 and 3072 frames per read callback. Would an acceptable/reliable way of setting the number of frames be to set this value with a formula like: latency = desiredNumFrames / (sampleRate * bytesPerFrame)?
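One note on the formula above: latency (in seconds) is frames over sample rate, so bytesPerFrame should not appear; bytes per frame only matters when sizing byte buffers. A small sketch of the conversion (the function name is mine, not libsoundio's; the field you would feed the result into on the stream is, I believe, software_latency, set before opening the stream):

```c
/* Convert a desired block size in frames to a latency in seconds.
 * Note: bytes_per_frame is NOT involved here; it only matters when
 * sizing byte buffers, e.g. frames * bytes_per_frame bytes of storage. */
static double latency_for_frames(int desired_frames, int sample_rate)
{
    return (double)desired_frames / (double)sample_rate;
}
```

For example, 512 frames at 16 kHz comes out to 0.032 s. Keep in mind libsoundio treats this as a hint, so the callback may still deliver a different frame count than the one you asked for.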
Regarding 4ch -> 1ch, I am unsure whether the best way forward is to create a temporary data structure to hold the output of soundio_instream_begin_read, process it, and then memcpy into a mono buffer, OR to proceed as normal, saving the output of soundio_instream_begin_read into a SoundIoRingBuffer and going back to my previously mentioned idea of a processed buffer that reads from the input buffer.
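Whichever structure is used, the mix-down arithmetic itself is the same and cheap. As a hedged sketch (plain averaging stands in as a placeholder for the actual beamformer, which would apply per-channel delays/weights instead), downmixing interleaved 4-channel frames to mono might look like:

```c
#include <stddef.h>

#define NUM_CHANNELS 4

/* Placeholder mix-down: average the four interleaved channels per frame.
 * A real beamformer would apply per-channel delays/weights instead. */
static void downmix_4ch_to_mono(const float *interleaved, float *mono,
                                size_t frame_count)
{
    for (size_t f = 0; f < frame_count; f++) {
        float sum = 0.0f;
        for (size_t ch = 0; ch < NUM_CHANNELS; ch++)
            sum += interleaved[f * NUM_CHANNELS + ch];
        mono[f] = sum * 0.25f;
    }
}
```

The ring-buffer route has the advantage of decoupling capture from processing, so the FFT stage can consume whatever block size it wants regardless of how many frames each callback delivered, which fits the variable-block-size requirement above.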
I am also going to have to deal with converting 16 kHz to a more standard sample rate (or at least one actually accepted by my output device) at some point, but I think that if I have the two main "problems" above thought through, there should be an obvious point at which to resample.
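The natural hook for that step is right after the mono beamformed signal, before the write side. For production you would likely want a proper resampler library such as libsamplerate, but a naive linear interpolator (sketched below; function name and signature are mine) is enough to show the shape, and 16 kHz divides common device rates like 48 kHz evenly:

```c
#include <stddef.h>

/* Naive linear-interpolation resampler (mono). Only illustrative;
 * a real program should use a proper resampler (e.g. libsamplerate).
 * Writes n_out frames into out and returns the number written. */
static size_t resample_linear(const float *in, size_t n_in,
                              float *out, size_t n_out,
                              int in_rate, int out_rate)
{
    if (n_in == 0)
        return 0;
    for (size_t j = 0; j < n_out; j++) {
        double pos = (double)j * in_rate / out_rate;
        size_t i = (size_t)pos;
        if (i >= n_in - 1) {            /* clamp at the end of the input */
            out[j] = in[n_in - 1];
            continue;
        }
        double frac = pos - (double)i;
        out[j] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
    }
    return n_out;
}
```

Linear interpolation introduces some aliasing, which is another reason to swap in a real resampler once the pipeline works end to end.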
Thanks again,
Hi All,
I have just got started creating a small command-line tool for beamforming with the MS Kinect in C using libsoundio, following the example pieces of code (mainly the microphone example). I have set up my devices etc. and have reached the point of creating the read/write callbacks.
This is where my question comes in. I want to do some DSP before output (obviously); where would the best place to put this be: in one of the read/write callbacks, or within a new "process" callback that I create (so an extra buffer as well)? The beamforming DSP I have prototyped in MATLAB should run in real time and I am confident it can, though it can get quite computationally intensive.
Any thoughts/tips would be great.
Thanks.