Closed: chappjo closed this issue 1 year ago
Ok, the upscale finished, and the output looks correct, so the hack fix does work.
Calling `cuda()` directly makes it hard to switch to CPU or MPS. I guess there is a general device variable set somewhere. Can we use that in those two instances?
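For reference, the device-selection pattern behind such a general device variable can be sketched roughly like this. This is an illustrative, torch-free stand-in, not the webui's actual API; the function name and flags are hypothetical:

```python
# Hypothetical sketch of a central device-selection helper, so callers
# never hard-code "cuda". Booleans stand in for the real runtime checks.
def get_optimal_device(cuda_available: bool, mps_available: bool,
                       force_cpu: bool = False) -> str:
    """Pick one device string for the whole app."""
    if force_cpu:        # e.g. set by a --use-cpu style flag
        return "cpu"
    if cuda_available:   # torch.cuda.is_available() in real code
        return "cuda"
    if mps_available:    # torch.backends.mps.is_available() in real code
        return "mps"
    return "cpu"
```

Callers would then write `model.to(device)` with the selected device instead of calling `model.cuda()` unconditionally.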
I've created a new PR #5586 to address this.
Can you give it a try please?
The PR has been merged now. Please give it a test.
Is there an existing issue for this?
What happened?
I have a laptop without a discrete GPU, so I run the Web UI on my CPU using the command line argument `--use-cpu all`. This works for Text2Image, Image2Image, ESRGAN, etc., but not for LDSR.
I have managed to fix this in a hacky way, and it no longer produces the error. I still need to wait for the upscale I'm running to finish so I can check whether the output is correct. On my CPU it takes nearly 2 hours to 4x upscale a 640x512 image :|
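For context, a flag like `--use-cpu all` typically takes a list of module names, with `all` forcing everything onto the CPU. A minimal sketch of how such a flag can be parsed and consumed (the argument and function names here are illustrative, not the webui's exact implementation):

```python
import argparse

parser = argparse.ArgumentParser()
# accepts e.g.  --use-cpu all   or   --use-cpu esrgan ldsr
parser.add_argument("--use-cpu", nargs="+", default=[], dest="use_cpu")

def module_runs_on_cpu(module: str, use_cpu: list) -> bool:
    # "all" forces every module to the CPU; otherwise check per-module
    return "all" in use_cpu or module in use_cpu

args = parser.parse_args(["--use-cpu", "all"])
```

Under this scheme, LDSR failing with `--use-cpu all` means its code path never consults the flag and calls `cuda()` regardless.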
To fix this I opened `ldsr_model_arch.py` and changed:
`model.cuda()`
to
`model.to(devices.cpu)`
and
`c = c.to(torch.device("cuda"))`
to
`c = c.to(torch.device("cpu"))`
and added the import
`from modules import devices as devices`
(the `as devices` is probably unnecessary). Obviously this is not a proper fix, but if anyone else is experiencing the same issue, they can use this hack.
Steps to reproduce the problem
What should have happened?
LDSR should work with `--use-cpu all` in the `COMMANDLINE_ARGS`
Commit where the problem happens
98947d173e3f1667eba29c904f681047dea9de90
What platforms do you use to access the UI?
Linux
What browsers do you use to access the UI?
Mozilla Firefox
Command Line Arguments
Additional information, context and logs
Error completing request
Arguments: (0, 0, <PIL.Image.Image image mode=RGB size=640x512 at 0x7F4A63EE66B0>, None, '', '', True, 0, 0, 0, 2, 512, 512, True, 3, 0, 1, False) {}
Traceback (most recent call last):
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "/home/pc/programs/linux/stable-diffusion-webui/webui.py", line 54, in f
    res = func(*args, **kwargs)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 187, in run_extras
    image, info = op(image, info)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 148, in run_upscalers_blend
    res = upscale(image, upscale_args)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 116, in upscale
    res = upscaler.scaler.upscale(image, resize, upscaler.data_path)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/upscaler.py", line 64, in upscale
    img = self.do_upscale(img, selected_model)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model.py", line 54, in do_upscale
    return ldsr.super_resolution(img, ddim_steps, self.scale)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model_arch.py", line 87, in super_resolution
    model = self.load_model_from_config(half_attention)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model_arch.py", line 27, in load_model_from_config
    model.cuda()
  File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 128, in cuda
    device = torch.device("cuda", torch.cuda.current_device())
  File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 552, in current_device
    _lazy_init()
  File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled