AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]:error LDSR upscale #16325

Open quartollo77 opened 3 months ago

quartollo77 commented 3 months ago

Checklist

What happened?

There is a problem with LDSR upscaling: it crashes and does not upscale.

The error message is: RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float

I know I can work around it with --no-half, but image generation then becomes very slow.

Before the webui 1.10 update everything worked fine.
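For context, the call that fails is torch's linear layer, which requires both matmul operands to share a dtype. A minimal simulation of that check (plain Python, no torch; dtype names are strings, and `check_linear_dtypes` is a hypothetical stand-in, not webui or torch code):

```python
# Simulated sketch of the dtype check that torch.nn.functional.linear
# performs (plain Python, no torch; dtypes are represented as strings).
def check_linear_dtypes(mat1_dtype, mat2_dtype):
    """Raise the same error torch raises when matmul operands disagree."""
    if mat1_dtype != mat2_dtype:
        raise RuntimeError(
            f"mat1 and mat2 must have the same dtype, "
            f"but got {mat1_dtype} and {mat2_dtype}"
        )

# Half model activations against Float LDSR weights reproduce the crash;
# --no-half makes everything Float, which is why it works but runs slowly.
check_linear_dtypes("Float", "Float")  # matching dtypes pass the check
```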

Steps to reproduce the problem

1. Go to the Extras tab
2. Try to run an LDSR upscale

What should have happened?

LDSR should upscale the image; instead it crashes and does not upscale.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2024-08-04-09-21.json

Console logs

Plotting: Restored training weights
*** Error completing request
*** Arguments: ('task(tanocltnxrd3ayi)', 0.0, <PIL.Image.Image image mode=RGBA size=253x238 at 0x1B36D57D0C0>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'LDSR', 'None', 0, False, 1, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru'], False, 'None', False, False, 240, 10, 10) {}
    Traceback (most recent call last):
      File "C:\automa\sd.webui\webui\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "C:\automa\sd.webui\webui\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "C:\automa\sd.webui\webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\automa\sd.webui\webui\modules\postprocessing.py", line 133, in run_postprocessing_webui
        return run_postprocessing(*args, **kwargs)
      File "C:\automa\sd.webui\webui\modules\postprocessing.py", line 73, in run_postprocessing
        scripts.scripts_postproc.run(initial_pp, args)
      File "C:\automa\sd.webui\webui\modules\scripts_postprocessing.py", line 198, in run
        script.process(single_image, **process_args)
      File "C:\automa\sd.webui\webui\scripts\postprocessing_upscale.py", line 152, in process
        upscaled_image = self.upscale(pp.image, pp.info, upscaler1, upscale_mode, upscale_by, max_side_length, upscale_to_width, upscale_to_height, upscale_crop)
      File "C:\automa\sd.webui\webui\scripts\postprocessing_upscale.py", line 107, in upscale
        image = upscaler.scaler.upscale(image, upscale_by, upscaler.data_path)
      File "C:\automa\sd.webui\webui\modules\upscaler.py", line 68, in upscale
        img = self.do_upscale(img, selected_model)
      File "C:\automa\sd.webui\webui\extensions-builtin\LDSR\scripts\ldsr_model.py", line 58, in do_upscale
        return ldsr.super_resolution(img, ddim_steps, self.scale)
      File "C:\automa\sd.webui\webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 137, in super_resolution
        logs = self.run(model["model"], im_padded, diffusion_steps, eta)
      File "C:\automa\sd.webui\webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 96, in run
        logs = make_convolutional_sample(example, model,
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\automa\sd.webui\webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 228, in make_convolutional_sample
        sample, intermediates = convsample_ddim(model, c, steps=custom_steps, shape=z.shape,
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\automa\sd.webui\webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 184, in convsample_ddim
        samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback,
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\automa\sd.webui\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 104, in sample
        samples, intermediates = self.ddim_sampling(conditioning, size,
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\automa\sd.webui\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 164, in ddim_sampling
        outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\automa\sd.webui\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 189, in p_sample_ddim
        model_output = self.model.apply_model(x, t, c)
      File "C:\automa\sd.webui\webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 964, in apply_model
        output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
      File "C:\automa\sd.webui\webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 964, in <listcomp>
        output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\automa\sd.webui\webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 1400, in forward
        out = self.diffusion_model(xc, t)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\automa\sd.webui\webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\automa\sd.webui\webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 789, in forward
        emb = self.time_embed(t_emb)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
        input = module(input)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\automa\sd.webui\webui\extensions-builtin\Lora\networks.py", line 584, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "C:\automa\sd.webui\system\python\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float

Additional information

No response

light-and-ray commented 3 months ago

Yes, it does, but none of the devs wants to fix it. And that's reasonable: LDSR is very slow (100 steps), and its quality is not better than StableSR, for example, which works extremely fast with SD2 Turbo. The DAT and HAT upscalers are also very good, but they are not diffusion-based.

quartollo77 commented 3 months ago

Thanks for the answer. It's true that LDSR is very slow, but it was very useful, and its upscaling quality on small portions of an image (like in a faceswap) was good. Anyway, there will be other solutions for quality upscaling... I will follow your suggestions...

BrutolocoW commented 1 month ago

I'm having the same problem.

LDSR is not working on the Extras tab, but it works in the txt2img tab, which makes it unusable for arbitrary images.

lukemoore66 commented 1 month ago

This seems to be a very simple fix, as it's just a casting issue. Just replace line 137 of ./extensions-builtin/LDSR/ldsr_model_arch.py:

logs = self.run(model["model"], im_padded, diffusion_steps, eta)

with:

        with devices.autocast():
            logs = self.run(model["model"], im_padded, diffusion_steps, eta)

Be sure to keep the correct indentation.

I'm not too familiar with the codebase, nor am I a developer on this project, but it works for me in all scenarios. Hopefully someone with more experience will see this and make a proper fix.
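To illustrate why wrapping the call in an autocast context resolves the mismatch, here is a plain-Python sketch (no torch; the `autocast` and `linear` below are hypothetical stand-ins that only mimic how `devices.autocast()`, which typically delegates to `torch.autocast`, casts mixed operands to a common dtype):

```python
import contextlib

# Plain-Python stand-in (no torch) for how an autocast context resolves
# the Half/Float mismatch. `autocast` is hypothetical and only mimics
# the casting behavior of devices.autocast() / torch.autocast.
_autocast_dtype = None


@contextlib.contextmanager
def autocast(dtype="Half"):
    global _autocast_dtype
    _autocast_dtype = dtype
    try:
        yield
    finally:
        _autocast_dtype = None


def linear(input_dtype, weight_dtype):
    # Inside the context, both operands are cast to the common dtype,
    # so the strict equality check no longer fires.
    if _autocast_dtype is not None:
        input_dtype = weight_dtype = _autocast_dtype
    if input_dtype != weight_dtype:
        raise RuntimeError(
            f"mat1 and mat2 must have the same dtype, "
            f"but got {input_dtype} and {weight_dtype}"
        )
    return input_dtype


with autocast():
    linear("Half", "Float")  # no longer raises inside the context
```

Outside the context, the same mixed-dtype call still raises, which matches the behavior reported in the traceback.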

BrutolocoW commented 1 month ago

> This seems to be a very simple fix, as it's just a casting issue. Just replace line 137 of ./extensions-builtin/LDSR/ldsr_model_arch.py:
>
> logs = self.run(model["model"], im_padded, diffusion_steps, eta)
>
> with:
>
>         with devices.autocast():
>             logs = self.run(model["model"], im_padded, diffusion_steps, eta)
>
> Be sure to keep the correct indentation.
>
> I'm not too familiar with the codebase, nor am I a developer on this project, but it works for me in all scenarios. Hopefully someone with more experience will see this and make a proper fix.

I tested it. That change made it work in my installation. Thank you.