Closed. c2727c closed this issue 1 year ago.
Hello! The answer to your question is: it depends.
No value is strictly erroneous, but there is a trade-off: large values of frames_per_buffer can introduce noticeable latency (and use more memory), while small values hurt performance (because the callback is called more often).
I would say it is better to take a value of 512 or greater, but again, this is at the discretion of the developer.
You hear accelerated audio for one of two reasons: either the device cannot keep up (so some frames are skipped), or you capture a single-channel recording and save it as a two-channel one.
Given that increasing frames_per_buffer helped, you have the first case.
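The second cause (mono data saved under a stereo header) can be reproduced with just the standard-library wave module: the same raw PCM bytes, written with a two-channel header, report half the duration and therefore play back twice as fast. A minimal sketch; the sample rate and width here are illustrative, not taken from the example script:

```python
import io
import wave

RATE = 44100
SAMPLE_WIDTH = 2                 # bytes per sample (16-bit PCM)
mono_data = b"\x00\x00" * RATE   # 1 second of silent mono audio

def wav_duration(data: bytes, channels: int) -> float:
    """Write raw PCM into an in-memory WAV with the given channel
    count and return the duration the header implies."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(SAMPLE_WIDTH)
        w.setframerate(RATE)
        w.writeframes(data)
    buf.seek(0)
    with wave.open(buf, "rb") as r:
        return r.getnframes() / r.getframerate()

print(wav_duration(mono_data, 1))  # 1.0 second: header matches the data
print(wav_duration(mono_data, 2))  # 0.5 seconds: same bytes, "twice as fast"
```

Since increasing frames_per_buffer fixed your recording, this is not your case, but it is the first thing worth ruling out when loopback audio sounds sped up.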
single frame size = sample width * number of channels (bytes)
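The arithmetic above can be spelled out for the common 16-bit stereo case. The 2-byte sample width matches what pyaudio.get_sample_size(pyaudio.paInt16) returns; it is hard-coded here so the sketch has no PyAudio dependency:

```python
# Frame and buffer size arithmetic for 16-bit stereo audio.
sample_width = 2        # bytes per sample; equals get_sample_size(paInt16)
channels = 2

frame_size = sample_width * channels          # bytes in one frame
frames_per_buffer = 512                       # the suggested lower bound
buffer_size = frame_size * frames_per_buffer  # bytes handled per callback

print(frame_size)   # 4 bytes per frame
print(buffer_size)  # 2048 bytes per buffer
```

Note that frames_per_buffer counts frames, not bytes, which is why passing get_sample_size(paInt16) (i.e. 2) as in the original question makes the buffer far too small.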
Hello, I was trying out the example pawp_record_wasapi_loopback.py. However, the recorded audio was extremely accelerated. I managed to solve the issue by setting frames_per_buffer=pyaudio.get_sample_size(pyaudio.paInt16)*1024 instead of pyaudio.get_sample_size(pyaudio.paInt16).
Could the frames_per_buffer in the example be wrong?