fumiama / Retrieval-based-Voice-Conversion-WebUI

Easily train a good VC model with voice data <= 10 mins!
GNU Affero General Public License v3.0

Question about how cross-fade works in Realtime (may be bug?) #71

Open Mojobones opened 3 months ago

Mojobones commented 3 months ago

Hello! I have a question about how cross-fade works for the realtime GUI. This is more a question that could become either a feature request or a bug.

Theoretically, it should blend the chunks into each other, which should have almost no latency hit, since the slider would presumably control only how much of each chunk overlaps.
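For reference, here is a minimal sketch of what I mean by "blending the chunks into each other": a plain linear crossfade over the overlap region, which by itself adds no latency beyond the overlap length. (This is just an illustration of the idea, not the WebUI's actual implementation.)

```python
import numpy as np

def crossfade(prev_tail: np.ndarray, next_head: np.ndarray) -> np.ndarray:
    """Linearly blend the tail of the previous chunk into the head of
    the next chunk. Both inputs are the overlap region and must be the
    same length."""
    n = len(prev_tail)
    fade = np.linspace(0.0, 1.0, n)  # ramps 0 -> 1 across the overlap
    return prev_tail * (1.0 - fade) + next_head * fade

# Example: fade a constant 1.0 signal into a constant 0.0 signal.
out = crossfade(np.ones(4), np.zeros(4))  # starts at 1.0, ends at 0.0
```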

But cross-fade seems to add a flat amount of latency to the realtime output. For example, I measured my latency at multiple cross-fade values, and a cross-fade of 0.15 has almost exactly 0.1 s more latency than 0.05, which is quite a bit for real-time communication.

But I was measuring from a cold start: I would not be speaking, then speak, and measure the delta between my real voice and the converted voice.

So my question is: is cross-fade attempting to process silence? Like, will it append (length of crossfade) milliseconds at the beginning of a converted chunk so that it can cross-fade that silence into the output? If so, is this intended? If not, what causes the extra latency from crossfade?

Thank you so much for any insight! ❤️

fumiama commented 3 months ago

is cross-fade attempting to process silence

Yes.

If so, is this intended?

No. The original author did it that way just because it's easy to implement.

This is a problem. We can fix it later by detecting silence and not processing it.

TheTrustedComputer commented 3 months ago

I also noticed that real-time inference uses the GPU even during silence, which honestly seems like a waste of resources. I believe there is an opportunity for performance gains here by implementing some sort of noise gate in the function handling the "response threshold".
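A minimal sketch of such a gate, assuming an RMS-based threshold in dBFS (the threshold value and the `run_model` inference call are placeholders, not the project's actual API):

```python
import numpy as np

SILENCE_THRESHOLD_DB = -60.0  # hypothetical gate level; would be user-tunable

def is_silent(chunk: np.ndarray, threshold_db: float = SILENCE_THRESHOLD_DB) -> bool:
    """Return True if the chunk's RMS level is below the gate threshold."""
    rms = np.sqrt(np.mean(chunk ** 2))
    if rms == 0.0:
        return True
    return 20.0 * np.log10(rms) < threshold_db

def process(chunk: np.ndarray) -> np.ndarray:
    """Skip the GPU-heavy conversion entirely for silent input and emit
    silence instead, saving compute (and any crossfade padding) during
    pauses."""
    if is_silent(chunk):
        return np.zeros_like(chunk)
    return run_model(chunk)  # hypothetical: the actual VC inference call
```

In practice the gate would probably want some hysteresis (attack/release) so speech onsets are not clipped, but the idea above is the core of it.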