Open shanhaidexiamo opened 2 months ago
This is intended, possibly due to libtorch performance when the input chunk size changes.
> This is intended, possibly due to libtorch performance when the input chunk size changes.

So you mean the volume fluctuation is intended? And how long did the first chunk take in your experiment? Thank you.
This issue is stale because it has been open for 30 days with no activity.
Hi, I'm trying your new streaming-inference code based on webui.py. I ran the demo on an A10 with all settings at their defaults, and the RTF of the first chunk is very high: I have to wait 4-5 seconds before the first yield. How long did the first chunk take in your streaming-inference tests?
My second problem is that the audio volume fluctuates during streaming inference, while non-streaming inference does not have this problem. Do you see the same issue?
Thank you