AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: LDSR (Latent Diffusion Super Resolution) does not respect --use-cpu all #4762

Closed: chappjo closed this issue 1 year ago

chappjo commented 1 year ago

Is there an existing issue for this?

What happened?

I have a laptop without a discrete GPU, so I run the web UI on my CPU using the command-line argument `--use-cpu all`. This works for Text2Image, Image2Image, ESRGAN, etc., but not for LDSR.

I have managed to fix this in a hacky way, and it no longer produces the error. I still need to wait for the upscale I'm running to finish so I can check whether the output is correct. On my CPU it takes nearly 2 hours to 4x upscale a 640x512 image :|

To fix this I opened ldsr_model_arch.py and:

  1. Changed line 27 from `model.cuda()` to `model.to(devices.cpu)`
  2. Changed line 148 from `c = c.to(torch.device("cuda"))` to `c = c.to(torch.device("cpu"))`
  3. Added a new import line `from modules import devices as devices` (the `as devices` is probably unnecessary)

Obviously this is not a proper fix, but if anyone else is experiencing the same issue, they can use this hack.
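For reference, this is roughly what the patched spots look like (a sketch excerpted from modules/ldsr_model_arch.py at the commit listed below; line numbers and surrounding code may differ in your checkout):

```python
# modules/ldsr_model_arch.py -- workaround sketch, not a proper fix
from modules import devices  # added import

# line 27: was  model.cuda()
model.to(devices.cpu)

# line 148: was  c = c.to(torch.device("cuda"))
c = c.to(torch.device("cpu"))
```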

Steps to reproduce the problem

  1. Install CPU torch
  2. Add --use-cpu all to COMMANDLINE_ARGS
  3. Text2Image, Image2Image, ESRGAN, etc. should work fine
  4. Trying to upscale with LDSR gives the error "AssertionError: Torch not compiled with CUDA enabled"

What should have happened?

LDSR should work with `--use-cpu all` in COMMANDLINE_ARGS.

Commit where the problem happens

98947d173e3f1667eba29c904f681047dea9de90

What platforms do you use to access the UI?

Linux

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

--skip-torch-cuda-test --no-half --use-cpu all --medvram

Additional information, context and logs

Error completing request
Arguments: (0, 0, <PIL.Image.Image image mode=RGB size=640x512 at 0x7F4A63EE66B0>, None, '', '', True, 0, 0, 0, 2, 512, 512, True, 3, 0, 1, False) {}
Traceback (most recent call last):
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "/home/pc/programs/linux/stable-diffusion-webui/webui.py", line 54, in f
    res = func(*args, **kwargs)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 187, in run_extras
    image, info = op(image, info)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 148, in run_upscalers_blend
    res = upscale(image, *upscale_args)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 116, in upscale
    res = upscaler.scaler.upscale(image, resize, upscaler.data_path)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/upscaler.py", line 64, in upscale
    img = self.do_upscale(img, selected_model)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model.py", line 54, in do_upscale
    return ldsr.super_resolution(img, ddim_steps, self.scale)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model_arch.py", line 87, in super_resolution
    model = self.load_model_from_config(half_attention)
  File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model_arch.py", line 27, in load_model_from_config
    model.cuda()
  File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 128, in cuda
    device = torch.device("cuda", torch.cuda.current_device())
  File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 552, in current_device
    _lazy_init()
  File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
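For what it's worth, the assertion comes from torch's lazy CUDA init: on a CPU-only torch build, any `.cuda()` call fails this way. A minimal standalone illustration (not webui code):

```python
# Illustration only: reproduces the same assertion on a CPU-only torch build.
import torch
import torch.nn as nn

model = nn.Linear(4, 4)

try:
    model.cuda()  # goes through torch.cuda._lazy_init() under the hood
except (AssertionError, RuntimeError) as e:
    print(e)  # "Torch not compiled with CUDA enabled" on CPU-only builds
```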

chappjo commented 1 year ago

OK, the upscale finished and the output looks correct, so the hack fix does work. (Attached: output image 00003.)

krummrey commented 1 year ago

Calling `cuda()` directly makes it hard to switch to CPU or MPS. I guess there is a general DEVICE variable set somewhere. Can we use that in those two instances?
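Something along these lines, perhaps (only a sketch, assuming the webui's existing modules.devices helper, where devices.device is chosen from --use-cpu and the available backends at startup):

```python
# Sketch: reuse the webui's device selection instead of hard-coding cuda.
# Assumes modules.devices as used by the other upscalers.
from modules import devices

model = model.to(devices.device)   # instead of model.cuda()
c = c.to(devices.device)           # instead of c.to(torch.device("cuda"))
```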

wywywywy commented 1 year ago

I've created a new PR #5586 to address this.

Can you give it a try, please?

wywywywy commented 1 year ago

The PR has been merged now. Please give it a test.