Closed bmc84 closed 2 years ago
Hey, the correct fix is to reduce the batch size. I've gotten enough questions about this that I've implemented an automatic batch size adjustment mechanism that works based on your available GPU memory. Please pull the latest version from the "main" branch and give it a try.
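The issue doesn't show the actual implementation, but the idea of sizing the batch to fit free GPU memory can be sketched roughly like this. Everything here is hypothetical: `pick_batch_size`, `per_sample_bytes`, and the cap of 16 are illustrative, not the project's real heuristic.

```python
def pick_batch_size(free_bytes, per_sample_bytes, max_batch=16):
    """Pick the largest batch size that fits in free GPU memory.

    per_sample_bytes is a rough, assumed estimate of the memory one
    sample needs; the real mechanism may measure this differently.
    """
    fit = free_bytes // per_sample_bytes
    return max(1, min(max_batch, fit))

# e.g. ~6 GiB free and ~1.5 GiB per sample -> batch of 4
print(pick_batch_size(6 * 2**30, int(1.5 * 2**30)))  # → 4
```

In practice the free-memory figure would come from the CUDA runtime (e.g. `torch.cuda.mem_get_info()` in PyTorch) rather than being passed in by hand.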
Thanks for the quick response & fix to get it working again.
It's working now, but there's a very large performance hit. What's the cause of this? Previously a sentence would generate in under 3 minutes (using whatever the default batch size was, I never changed it) with "standard". Now the exact same sentence and voice, using standard and 1 candidate, is taking around 10 minutes. Is this slowdown expected with whatever has changed?
I expect a small performance hit, like 10%, but nothing like that. It does not line up with what I am seeing, but it might be caused by your computer having to hit the garbage collector too often and thrashing.
Rendering time is highly dependent on the length of text you provide. Are you just feeding in a longer prompt? That might also explain why you saw the crash suddenly.
If you have some time and want to help me with this, can you pull the v2.2 release and clock the exact rendering time for the same phrase with both releases and post it here?
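For the timing comparison, a minimal harness like the one below would do; `render_fn` is a placeholder for whichever generation call each release exposes, so swap in the real API from v2.2 and from main.

```python
import time

def clock_render(render_fn, phrase, runs=3):
    """Average the wall-clock seconds of render_fn(phrase) over a few runs.

    render_fn is a stand-in for the actual generation call; averaging a
    few runs smooths out one-off variance (first-run model loading, etc.).
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        render_fn(phrase)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)
```

Running it once against each checkout with the identical phrase and preset gives directly comparable numbers to post back here.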
Hi,
I'm raising an issue for "RuntimeError: CUDA out of memory" because, since upgrading to the latest version, this is what happens when using the exact same commands that executed successfully in the previous version 😥
This error happens with any combination, such as:
- preset = standard / candidates = 3
- preset = fast / candidates = 1
I've ensured I'm running this under identical settings to when it did work (i.e. nothing else open, nothing hogging GPU RAM). I just can't get this to work any more since the upgrade, where it used to work on my RTX 3070 using the 'standard' setting on Windows 10.
Any assistance would be very much appreciated. The full error is below (this was with fast/1 candidate).