Rihnami opened this issue 1 year ago
Try `--lowvram`; 8 GB is not that much for this kind of application.
You can also try running on Linux or updating your graphics drivers. If that doesn't help, it might be a problem in torch. Using 2 GPUs at once might give you more than 8 GB.
> Try `--lowvram`; 8 GB is not that much for this kind of application.

But on a GTX 1650, `--medvram` works fine and there were no errors.
Yeah, that doesn't make sense; this could be a genuinely hard issue to solve. You could run it with verbose output and post the logs to a pastebin.
It seems to be related to the source image's dimensions not being a multiple of 4. In my case, changing the Width and Height parameters of "Resize to" to the nearest multiple of 4 fixes the issue.
| Source file dimensions | Resize to params | Result |
|---|---|---|
| 700 x 1050 | 700 x 1050 | RAM Error |
| 700 x 1050 | 704 x 1048 | Works |
| 700 x 1050 | 696 x 1048 | RAM Error |
Happy calculating!
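The workaround in the table can be automated. A minimal sketch (the `round_to_multiple` helper is hypothetical, not part of WebUI) that snaps a dimension to the nearest multiple of 8:

```python
def round_to_multiple(value, multiple=8):
    """Round a dimension to the nearest multiple (never below one multiple)."""
    return max(multiple, round(value / multiple) * multiple)

# The 700 x 1050 source image from the table above becomes 704 x 1048,
# the combination reported to work.
width, height = round_to_multiple(700), round_to_multiple(1050)
print(width, height)  # 704 1048
```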
> It seems to be related to the source image's dimensions not being a multiple of 4. In my case, changing the Width and Height parameters of "Resize to" to the nearest multiple of 4 fixes the issue.
To avoid problems, you should use multiples of 8.
> Using 2 GPUs at once might give you more than 8 GB.

@coolst3r Can you point me in the direction of how to get torch to use two 8 GB GPUs at the same time on Linux? Thank you!
> It seems to be related to the source image's dimensions not being a multiple of 4. In my case, changing the Width and Height parameters of "Resize to" to the nearest multiple of 4 fixes the issue.
>
> To avoid problems, you should use multiples of 8.

You're right, that was a slip on my part. That's why it didn't work with every multiple of 4.
I have the same problem on a 2070 super, the only solution I found was restarting the program to clean the cache.
@CreativeSau how did you set the "Resize to" option? I can't find it in the UI. I'm having the same CUDA out of memory issue; hopefully the multiple-of-8 size helps.
Multiples of 64 give the smallest VRAM overhead; WebUI supports any resolution that is a multiple of 8.
Thanks @Sakura-Luna, but I don't see any input panel for the resize option. Can you let me know how to set the input image width and height to a multiple of 64?

> Thanks @Sakura-Luna, but I don't see any input panel for the resize option. Can you let me know how to set the input image width and height to a multiple of 64?

Just enter the values directly into the corresponding text boxes.
> Using 2 GPUs at once might give you more than 8 GB.

> @coolst3r Can you point me in the direction of how to get torch to use two 8 GB GPUs at the same time on Linux? Thank you!

You can just plug 2 GPUs into your motherboard.
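For what it's worth, just installing a second card does not let a single torch process pool VRAM across GPUs; each card's memory stays separate unless the workload is explicitly split or parallelized. What you can easily control is which cards the process sees, via the standard `CUDA_VISIBLE_DEVICES` environment variable (a sketch; `launch.py` is WebUI's usual entry point, adjust to your setup):

```shell
# Expose only the first GPU (device 0) to the process
CUDA_VISIBLE_DEVICES=0 python launch.py

# Expose both cards; torch will see them as cuda:0 and cuda:1,
# but the workload must be explicitly distributed to use both
CUDA_VISIBLE_DEVICES=0,1 python launch.py
```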
Ran into the same error using UI v1.6.0 while loading 2 checkpoints at the same time. It failed after all sampling completed, just before sending the final render to the UI. It appears that all my shared VRAM was used and CUDA needed 540 MB, which it tried to get from VRAM even though RAM was still available.
Fixed by checking the option "Only keep one model on device".
Is there an existing issue for this?
What happened?
Changed GPU from a GTX 1650 to an RTX 2060 Super and got this error.
Steps to reproduce the problem
Try to generate an image.
What should have happened?
Image generation should complete without the CUDA out of memory error.
Commit where the problem happens
64fc936738d296f5eb2ff495006e298c2aeb51bf
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
No response
Command Line Arguments
List of extensions
a1111-sd-webui-locon, LDSR, Lora, ScuNET, SwinIR, prompt-bracket-checker
Console logs
Additional information
New updates on torch 2.0 didn't work consistently, so I went back to 64fc936738d296f5eb2ff495006e298c2aeb51bf. But after changing the GPU I got this error. Once, after restarting the PC, it worked, but the next day it didn't. "Tried to allocate 3.33 GiB. 4.47 GiB free" - what?
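As for the confusing "Tried to allocate 3.33 GiB. 4.47 GiB free" message: free VRAM can be fragmented, so no single contiguous block is large enough even though the total is. A toy model (pure Python, with made-up gap sizes; not torch's real allocator) of how that happens:

```python
# Free VRAM split into non-contiguous gaps (sizes in GiB, illustrative only)
free_gaps = [2.5, 1.2, 0.77]
request = 3.33  # the allocation torch asked for

total_free = sum(free_gaps)      # what gets reported as "free"
largest_block = max(free_gaps)   # what one allocation can actually get
can_allocate = any(gap >= request for gap in free_gaps)

print(f"total free: {total_free:.2f} GiB, largest block: {largest_block:.2f} GiB")
print(f"request of {request} GiB succeeds: {can_allocate}")  # False
```

Restarting the process (as noted above for the 2070 Super) helps because it releases everything and lets allocations start from a clean, unfragmented pool.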