Open divid3byzer0 opened 2 weeks ago
@divid3byzer0 I noticed that this issue is a week old & is still open. Allow me to solve it for you.
Simply run this: python main.py --listen 0.0.0.0 --lowvram --preview-method auto --use-split-cross-attention
Instead of this: python main.py --listen 0.0.0.0
If you get "Killed" just before the end of the process...
Use this: --novram
Instead of this: --lowvram
Hopefully that solves your problem; it solved it for me! Enjoy. ♥
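To put the suggestion above in one place as a sketch: the full low-VRAM invocation, plus (as an extra assumption on my part, not something the original comment mentions) PyTorch's `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` allocator setting, which can reduce fragmentation-related CUDA OOMs on recent PyTorch versions:

```shell
# Optional extra (my assumption): expandable_segments can help with
# fragmentation-related "OutOfMemoryError: Allocation on device" failures.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Low-VRAM invocation from the comment above (ComfyUI's main.py):
python main.py --listen 0.0.0.0 --lowvram --preview-method auto --use-split-cross-attention

# If the process ends with "Killed", swap --lowvram for --novram:
# python main.py --listen 0.0.0.0 --novram --preview-method auto --use-split-cross-attention
```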
I am trying to run the model in Docker (Docker Desktop on Windows via WSL2) and my card is an RTX 4070 12GB, but I always hit the error "torch.cuda.OutOfMemoryError: Allocation on device", and although the predictions say "succeeded", there are no output files.
I am guessing that the minimum VRAM for this model is 16 GB?