Closed MikuAuahDark closed 1 year ago
https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/config.py#L129 You can try modifying `self.gpu_mem <= 4` to `self.gpu_mem <= 7` to force low-VRAM mode.
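The gate being edited above can be sketched as follows. This is a minimal illustration of the idea, not the repository's actual code: the function name, parameter names, and the tuple values are assumptions made for the example.

```python
# Sketch of a VRAM-threshold gate like the one at config.py#L129.
# The specific window values below are illustrative assumptions,
# NOT the actual numbers used by RVC WebUI.
def select_inference_window(gpu_mem_gb: int, low_vram_threshold: int = 4):
    """Return (x_pad, x_query, x_center, x_max) chunking parameters.

    Raising `low_vram_threshold` from 4 to 7 pushes 6 GB cards
    (e.g. an RTX 3060 6GB) onto the low-VRAM path.
    """
    if gpu_mem_gb <= low_vram_threshold:
        # Smaller processing windows -> lower peak VRAM at inference time.
        return (1, 5, 30, 32)   # hypothetical low-VRAM settings
    return (3, 10, 60, 65)      # hypothetical default settings
```

With the shipped threshold of 4, a 6 GB card takes the default path; raising the threshold to 7, as suggested above, routes it to the smaller windows instead.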
Thanks. The resulting audio shows no noticeable quality loss either.
Are there any plans to make that a configurable option?
Hello,
Is it possible to tune down the inference quality to reduce VRAM usage? I'm running an RTX 3060 6GB, and while I can train with a batch size of 4 on V2 without problems, I can't run inference on certain audio files due to an out-of-memory error. I don't think it's an f0 prediction issue, because the out-of-memory error still occurs with all of the `pm`, `harvest`, and `crepe` f0 methods.

Platform is Windows 11, running natively without WSL2.