lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Error: Input type (float) and bias type (struct c10::Half) should be the same. #168

Open iMiKED opened 1 year ago

iMiKED commented 1 year ago

Is there an existing issue for this?

What happened?

The following error appears: `Error: Input type (float) and bias type (struct c10::Half) should be the same. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \.`
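
For context, the underlying PyTorch failure is a dtype mismatch: a convolution whose weight and bias have been cast to float16 (`c10::Half`) receives a float32 input. A minimal sketch of the same class of error in plain PyTorch (not the webui's or Deforum's code; the exact error wording varies by backend and build):

```python
import torch

# A Conv2d cast to half precision stores its weight and bias as float16 (c10::Half).
conv = torch.nn.Conv2d(3, 8, kernel_size=3).half()

# Freshly created tensors default to float32 ("float" in the reported error text).
x = torch.randn(1, 3, 64, 64)

try:
    conv(x)  # dtype mismatch -> RuntimeError (wording differs per backend/build)
except RuntimeError as e:
    print(e)

# Matching the dtypes avoids the mismatch: either cast the input with x.half()
# (typical on a GPU/DirectML device) or keep the whole model in float32.
```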

Steps to reproduce the problem

  1. Install latest Deforum
  2. Sampler: DPM++ 2M Karras
  3. Keyframes: 3D
  4. Init: Image Init - use the init image 'source.png', placed in the StableDiffusion folder
  5. Output: upscale x3
  6. Press Generate; a few frames in, the error appears

What should have happened?

Generation should proceed and complete without errors.

Commit where the problem happens

version:  •  python: 3.10.6  •  torch: 2.0.0+cpu  •  xformers: N/A  •  gradio: 3.31.0  •  checkpoint: 6ce0161689

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

AMD GPUs (RX 5000 below)

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

set COMMANDLINE_ARGS=--autolaunch

List of extensions

deforum-for-automatic1111-webui https://github.com/deforum-art/deforum-for-automatic1111-webui.git automatic1111-webui b58056f9 Tue Jun 6 21:47:35 2023 latest
sd-webui-controlnet https://github.com/Mikubill/sd-webui-controlnet.git main c9c7317c Thu Jun 15 01:33:20 2023 latest
LDSR built-in None Sat Jun 17 13:25:19 2023
Lora built-in None Sat Jun 17 13:25:19 2023
ScuNET built-in None Sat Jun 17 13:25:19 2023
SwinIR built-in None Sat Jun 17 13:25:19 2023
prompt-bracket-checker built-in None Sat Jun 17 13:25:19 2023

Console logs

```
Deforum extension for auto1111 webui, v2.4b
Git commit: b58056f9
Saving animation frames to:
c:\Downloads\StableDiffusion\stable-diffusion-webui-directml\outputs/img2img-images\Deforum_20230617202456
Loading MiDaS model from dpt_large-midas-2f21e586.pt...
Animation frame: 0/120
Seed: 3907518464
Prompt: awesome 25 year old girl, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera
╭─────┬───┬───────┬────┬────┬────┬────┬────┬────╮
│Steps│CFG│Denoise│Tr X│Tr Y│Tr Z│Ro X│Ro Y│Ro Z│
├─────┼───┼───────┼────┼────┼────┼────┼────┼────┤
│ 25  │7.0│  0.2  │ 0  │ 0  │1.75│ 0  │ 0  │ 0  │
╰─────┴───┴───────┴────┴────┴────┴────┴────┴────╯
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:07<00:00,  1.57s/it]
Animation frame: 2/120                                                                 | 5/540 [00:06<12:11,  1.37s/it]
Creating in-between cadence frame: 0; tween:0.50;

*START OF TRACEBACK*
Traceback (most recent call last):
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\run_deforum.py", line 78, in run_deforum
    render_animation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, root)
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\render.py", line 299, in render_animation
    depth = depth_model.predict(turbo_next_image, anim_args.midas_weight, root.half_precision)
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\depth.py", line 88, in predict
    depth_tensor = self.midas_depth.predict(prev_img_cv2, half_precision)
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\depth_midas.py", line 60, in predict
    midas_depth = self.midas_model.forward(sample)
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\src\midas\dpt_depth.py", line 166, in forward
    return super().forward(x).squeeze(dim=1)
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\src\midas\dpt_depth.py", line 114, in forward
    layers = self.forward_transformer(self.pretrained, x)
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\src\midas\backbones\vit.py", line 13, in forward_vit
    return forward_adapted_unflatten(pretrained, x, "forward_flex")
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\src\midas\backbones\utils.py", line 86, in forward_adapted_unflatten
    exec(f"glob = pretrained.model.{function_name}(x)")
  File "<string>", line 1, in <module>
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\src\midas\backbones\vit.py", line 47, in forward_flex
    x = self.patch_embed.proj(x).flatten(2).transpose(1, 2)
  File "c:\Downloads\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\lora.py", line 415, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "c:\Downloads\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "c:\Downloads\StableDiffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 32, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: pre_forward(op, args, kwargs))
  File "C:\Downloads\StableDiffusion\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 9, in pre_forward
    return forward(*args, **kwargs)
RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
*END OF TRACEBACK*

User friendly error message:
Error: Input type (float) and bias type (struct c10::Half) should be the same. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \.
Deforum progress:   1%|▌                                                               | 5/540 [00:07<13:15,  1.49s/it]
2023-06-17 20:25:11,709 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-06-17 20:25:11,713 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
2023-06-17 20:25:19,950 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-06-17 20:25:19,960 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
```

Additional information

GPU - Radeon RX 580 Series VRAM - 8192 MB - GDDR5 2000 MHz
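
From the traceback, the float32 image tensor reaches the half-precision MiDaS depth model (the failure is in `depth_midas.py` -> `patch_embed.proj`). A hedged workaround sketch, not the extension's actual API: cast the sample to whatever dtype the depth model's parameters use before calling `forward` (the helper below is hypothetical):

```python
import torch

def match_model_dtype(model: torch.nn.Module, sample: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: cast `sample` to the dtype of `model`'s parameters
    (e.g. float32 -> float16) so the conv weight/bias and the input agree."""
    param = next(model.parameters(), None)
    if param is not None and sample.dtype != param.dtype:
        sample = sample.to(dtype=param.dtype)
    return sample

# Sketch of use at the failing call site (deforum_helpers/depth_midas.py, predict):
# sample = match_model_dtype(self.midas_model, sample)
# midas_depth = self.midas_model.forward(sample)
```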

Greatpriceman commented 2 weeks ago

I'm getting the same thing, but I don't have Deforum.

This error appears whenever generation even starts.

If this is a separate issue, please tell me and I'll open it as a separate one.


```
*** Arguments: ('task(378z8dr92dmot3i)', <gradio.routes.Request object at 0x00000205B73C5D50>, ' <lora:smv1:1> Pizza Tokyo Backgrounds', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "F:\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "F:\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "F:\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "F:\stable-diffusion-webui-directml\modules\processing.py", line 1083, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "F:\stable-diffusion-webui-directml\modules\processing.py", line 1441, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "F:\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "F:\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "F:\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "F:\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "F:\stable-diffusion-webui-directml\modules\sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
      File "F:\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "F:\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 36, in __call__
        return self.__orig_func(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "F:\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
        x = layer(x)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "F:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
```
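
This second trace hits the same class of mismatch inside the UNet itself: float32 activations meeting a float16 `Conv2d` weight/bias during sampling. As a hedged debugging aid (the `shared.sd_model.model.diffusion_model` path is an assumption about the webui's internals), a quick dtype census of the loaded model shows whether its parameters are a float16/float32 mix:

```python
import collections

import torch

def param_dtype_census(model: torch.nn.Module) -> dict:
    """Count parameters per dtype; float16 weights paired with float32 inputs are
    what produce 'Input type (float) and bias type (struct c10::Half)' errors."""
    return dict(collections.Counter(p.dtype for p in model.parameters()))

# Example usage from a debug hook inside the webui process (module path assumed):
# from modules import shared
# print(param_dtype_census(shared.sd_model.model.diffusion_model))
```

If the census shows only float16 parameters, the float32 tensor is coming from the sampler side; on DirectML builds, running with the webui's full-precision options (e.g. `--no-half`) is a commonly reported workaround, though not a confirmed fix for this issue.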