Open trackme518 opened 9 months ago
Hello,

Re-using one `AudioSample` (acting as a fixed-length buffer) that is looping forever, like in your code above, seems like a good idea. If you are continuously writing new data into it, then the sample is basically acting as a circular buffer, so you can just copy the data over into the `AudioSample` in the same consecutive order that it comes in. The target index you are writing to simply has to wrap around at the length of the `AudioSample`.
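The wrap-around copy can be sketched in plain Java, independent of the Sound library (the fixed-length `AudioSample` is modeled here as a bare float array, which is an assumption for illustration):

```java
// Minimal sketch of the wrap-around copy into a fixed-length buffer.
public class CircularWrite {
    // Copies incoming frames into buffer starting at writeIndex,
    // wrapping at the buffer length; returns the next write position.
    static int writeWrapped(float[] buffer, int writeIndex, float[] incoming) {
        for (float v : incoming) {
            buffer[writeIndex] = v;
            writeIndex = (writeIndex + 1) % buffer.length; // wrap at buffer length
        }
        return writeIndex;
    }

    public static void main(String[] args) {
        float[] buffer = new float[4];
        int idx = 0;
        idx = writeWrapped(buffer, idx, new float[]{1, 2, 3});
        idx = writeWrapped(buffer, idx, new float[]{4, 5, 6}); // wraps around
        System.out.println(idx); // 2
    }
}
```

With the real `AudioSample` you would do the equivalent with its `write()` method, keeping the running write index yourself.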
If your incoming data is perfectly in sync with the framerate of the synthesis engine, that should be it, but I suspect that there might be pacing issues. If you want to make sure that you are not copying data over too fast or too slow, or want to delay a write that would affect an area that is currently being played back, you can use `AudioSample`'s `positionFrame()` method, which tells you where in the buffer the playhead currently is. A simple strategy I could imagine: with a 2048-frame buffer, you wait for the playhead to enter the second half of the buffer, then copy 1024 new frames into the first half; once the playhead reaches the end of the buffer and jumps back to the very beginning, you copy 1024 new frames into the second half, and so on ad infinitum.
I don't know where in your Processing sketch you are writing to the `AudioSample`, but since the execution frequency of any piece of sketch code is not very high, I can imagine that you might want to use a buffer much larger than 1024 frames.
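A quick back-of-the-envelope check shows why 1024 frames is tight (assuming a 44100 Hz sample rate and a `draw()` loop running at roughly 60 fps, which are assumptions, not values from your sketch):

```java
// How many audio frames are consumed per draw() call?
public class BufferSizing {
    public static void main(String[] args) {
        int sampleRate = 44100; // assumed output sample rate
        int fps = 60;           // assumed sketch framerate
        int framesPerDraw = sampleRate / fps;
        System.out.println(framesPerDraw); // 735
        // A 1024-frame buffer holds barely more than one draw() call worth
        // of audio, so any scheduling jitter in the sketch loop would cause
        // dropouts; a buffer several times larger leaves headroom.
    }
}
```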
As for the range of values, it depends on your desired amplitude, but the web reference example for `AudioSample`'s `write()` method uses values in the [-100, 100] range.
A potentially much faster and cleaner way to write directly to an output buffer would be to bypass the Sound library classes and implement your own JSyn `UnitGenerator` whose `generate()` method consumes the data from your stream. There is a simple example of a custom UnitGenerator in the JSyn docs, which warns against doing any complex I/O in the synthesis thread, so whether this is a feasible route for you depends on where your data is coming from.
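To make the idea concrete, here is a hedged plain-Java sketch of what such a `generate()` body could do: drain a lock-free queue (fed by your stream-reading thread) into the per-block output array, emitting silence on underrun. The JSyn base class and port types are deliberately omitted so this compiles stand-alone; in a real JSyn `UnitGenerator` the same loop would live in `generate(int start, int limit)` and write into the output port's value array:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Stand-alone analogue of a stream-consuming unit generator.
public class StreamUnitSketch {
    final ConcurrentLinkedQueue<Float> queue = new ConcurrentLinkedQueue<>();

    // Mirrors generate(start, limit): fill out[start..limit) from the stream.
    void generate(double[] out, int start, int limit) {
        for (int i = start; i < limit; i++) {
            Float v = queue.poll();          // non-blocking: no I/O here!
            out[i] = (v == null) ? 0.0 : v;  // silence on underrun
        }
    }

    public static void main(String[] args) {
        StreamUnitSketch unit = new StreamUnitSketch();
        unit.queue.add(0.5f);
        unit.queue.add(-0.5f);
        double[] block = new double[4];
        unit.generate(block, 0, 4); // only 2 samples queued, rest is silence
        System.out.println(java.util.Arrays.toString(block)); // [0.5, -0.5, 0.0, 0.0]
    }
}
```

The key design point, per the JSyn docs' warning, is that the synthesis thread only ever does a non-blocking `poll()`; all network or file I/O stays on your own reader thread that pushes into the queue.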
Hope that helps!
Hi, I am wondering what the best way would be to write directly into an output buffer. I would like to receive audio from a stream as a float or byte array, and I need to continuously write it to the speakers. The only public-facing object that seems suited for this is `AudioSample` with its `write()` method, but I am not sure how to time the writing into it. Any tips welcome.

What is the expected range of the audio data written (the source code assumes float...)?

Should I always create a new `AudioSample` object when I have new data, or should I rewrite the data in the existing sample object?