AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: sd_model.model.diffusion_model.dtype for SDXL still reports float when using --precision half #16221


feffy380 commented 4 months ago


What happened?

sd_models_xl.extend_sdxl() adds a shared.sd_model.model.diffusion_model.dtype attribute to SDXL models, but this attribute is not updated after the model is cast to float16 when using --precision half. Anything that relies on this attribute to determine the dtype of an SDXL model will see float instead of float16. SD1.5 models are unaffected, as far as I've seen.
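
For illustration, the mismatch can be seen from inside the webui process with something like the following (a sketch only; it assumes an SDXL checkpoint is already loaded with --precision half and uses only the names mentioned above):

from modules import devices, shared

unet = shared.sd_model.model.diffusion_model
print("reported dtype:", unet.dtype)                          # attribute added by extend_sdxl()
print("actual weight dtype:", next(unet.parameters()).dtype)  # dtype of the real parameters
print("devices.dtype_unet:", devices.dtype_unet)
# On the affected setup the first value is float while the weights are float16.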

I know of at least one extension that checks this attribute to determine the unet's dtype and is broken by the misreported value (https://github.com/aria1th/sd-webui-deepcache-standalone/issues/9). Hardcoding float16 in the extension, or using devices.dtype_unet instead, effectively works around the bug.
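
As a rough sketch of that workaround inside an extension (the helper name is made up; devices.dtype_unet and the attribute path are the ones discussed above):

import torch
from modules import devices, shared

def unet_working_dtype() -> torch.dtype:
    # shared.sd_model.model.diffusion_model.dtype misreports float under
    # --precision half, so rely on devices.dtype_unet, which tracks the dtype
    # the UNet weights were actually cast to.
    return devices.dtype_unet

# e.g. cast an intermediate tensor before handing it to the UNet:
x = torch.zeros(1, 4, 128, 128, device=devices.device, dtype=unet_working_dtype())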

Steps to reproduce the problem

  1. Run webui with --precision half.
  2. Load any SDXL model.
  3. Attempt to use DeepCache with "Refreshes caches when step is divisible by number" set to a value greater than 1.
  4. An exception is raised because the extension expects float based on the misreported dtype but receives float16 tensors instead.

What should have happened?

Extension runs without crashing.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2024-07-18-14-49.json

Console logs

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on hope user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /home/hope/src/sd/stable-diffusion-webui/venv
################################################################

################################################################
Launching launch.py...
################################################################
glibc version is 2.39
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.4
Python 3.11.9 (main, Apr 30 2024, 07:54:26) [GCC 13.2.1 20240417]
Version: v1.9.4-168-ge5dfc253
Commit hash: e5dfc2539efe017106c0539b12247cae45e9bb99
Launching Web UI with arguments: --api --flash-attn --precision half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
ldm/sgm GroupNorm32 replaced with normal torch.nn.GroupNorm due to `--precision half`.
Loading weights [461c3bbd5c] from /home/hope/src/sd/stable-diffusion-webui/models/Stable-diffusion/SeaArtFurryXL1.0.safetensors
Creating model from config: /home/hope/src/sd/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 6.4s (prepare environment: 1.1s, import torch: 2.3s, import gradio: 0.4s, setup paths: 0.7s, other imports: 0.3s, load scripts: 0.6s, create ui: 0.6s, add APIs: 0.3s).
Loading VAE weights from user metadata: /home/hope/src/sd/stable-diffusion-webui/models/VAE/sdxl-vae-fp16-fix.safetensors
Applying attention optimization: flash_attn... done.
Textual inversion embeddings loaded(3): feffyxl1, feffyxl2, feffyxl3
Textual inversion embeddings skipped(4): boring_e621_fluffyrock_v4, boring_e621_unbound_lite, boring_e621_unbound_plus, detailed_e621
Model loaded in 4.5s (load weights from disk: 0.3s, create model: 0.7s, apply weights to model: 3.0s, calculate empty prompt: 0.2s).
  0%|                                                                                                                                                                                                                             | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(e0wcb11dlj6by43)', <gradio.routes.Request object at 0x781745ecf190>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, '<p style="margin-bottom:0.75em">Keyframe Format: <br>Seed | Prompt or just Prompt</p>', '', 25, True, 5.0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/home/hope/src/sd/stable-diffusion-webui/modules/call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
                   ^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
        res = func(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/processing.py", line 847, in process_images
        res = process_images_inner(p)
              ^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/processing.py", line 984, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/processing.py", line 1342, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 218, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_samplers_common.py", line 272, in launch_sampling
        return func()
               ^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 218, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
                                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 244, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_models_xl.py", line 43, in apply_model
        return self.model(x, t, cond)
               ^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
                                                                     ^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/wrappers.py", line 28, in forward
        return self.diffusion_model(
               ^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/extensions/sd-webui-deepcache-standalone/deepcache.py", line 126, in hijacked_unet_forward
        emb = unet.time_embed(t_emb)
              ^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/container.py", line 217, in forward
        input = module(input)
                ^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 527, in network_Linear_forward
        return originals.Linear_forward(self, input)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 116, in forward
        return F.linear(input, self.weight, self.bias)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half

---

Additional information

No response

viking1304 commented 4 months ago

Are you sure that is the real reason for your problem?

I see the notice below on the extension's page:

Note : this does not work with ControlNet or UNet-forward-hijacking Extensions!!

and I see this in your error log:

File "/home/hope/src/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)

What happens if you disable ControlNet or remove it?

p.s. I might be wrong. I am just stating what I see.

feffy380 commented 4 months ago

I'm not using ControlNet. That line is just the fallback to the original processing function when the ControlNet checkbox is disabled. The error only occurs when I use --precision half.

I already pointed out the cause of the error (webui reporting the wrong dtype for the unet) and that using the correct dtype resolves the issue. If I knew where the attribute gets set, I'd submit a PR myself.
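
For what it's worth, one possible shape of a fix (a sketch only, not the actual webui code) would be to recompute the attribute from the weights after the precision cast instead of capturing it once in extend_sdxl():

import torch

def refresh_reported_dtype(diffusion_model: torch.nn.Module) -> None:
    # next(parameters()).dtype reflects any .half()/.to() cast already applied,
    # so calling this after --precision half has cast the model keeps the
    # attribute consistent with what the UNet actually computes in.
    diffusion_model.dtype = next(diffusion_model.parameters()).dtype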

viking1304 commented 4 months ago

As I already wrote, I am not saying you are wrong about the root cause. I am just pointing out that based on your log, you are using ControlNet.

If I were you, I would temporarily remove the ControlNet extension and double-check whether the same error occurs without it. The error log should be slightly different without ControlNet, since it will not contain the line I quoted from your log.

We will need to wait for the developers for a proper answer.