Mikubill / sd-webui-controlnet

WebUI extension for ControlNet
GNU General Public License v3.0

[Bug]: RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float #2147

Closed: KrakenJet closed this issue 11 months ago

KrakenJet commented 1 year ago

Is there an existing issue for this?

What happened?

This error keeps popping up when I try to use the ControlNet model control_v11p_sd15_canny [d14c016b] for img2img. What should I do? My specs are a GTX 1650 with 4 GB of VRAM, a Ryzen 5 3550H CPU (2.1-3.7 GHz), and 16 GB of RAM. My command-line arguments for the Automatic1111 web UI are --xformers --upcast-sampling --precision full --no-half-vae --opt-split-attention.
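For context, this RuntimeError comes from PyTorch's CPU group_norm kernel, which refuses a float32 input paired with half-precision affine parameters. A minimal sketch that reproduces the same error at the PyTorch level (assuming a recent PyTorch build; the tensor shapes are arbitrary):

```python
import torch
import torch.nn.functional as F

# The shapes are arbitrary; what matters is the dtype mismatch on the CPU:
# a float32 input combined with float16 (half) affine parameters.
x = torch.randn(1, 32, 8, 8)                  # float32 input on the CPU
weight = torch.ones(32, dtype=torch.float16)  # half-precision gamma
bias = torch.zeros(32, dtype=torch.float16)   # half-precision beta

try:
    F.group_norm(x, num_groups=32, weight=weight, bias=bias)
except RuntimeError as e:
    print(e)  # mixed dtype (CPU): expect parameter to have scalar type of Float
```

The traceback below ends in exactly this situation: an input upcast to float32 meeting normalization parameters that are still float16.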

Steps to reproduce the problem

  1. Go to SD 1.5 Automatic1111
  2. Open the ControlNet panel and put a picture inside
  3. Press Generate, and I get the error: RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

What should have happened?

Should've gotten a generated image with ControlNet applied.

Commit where the problem happens

webui: controlnet: here

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--xformers --upcast-sampling --precision full --no-half-vae --opt-split-attention --lowvram
I tried removing the flags one by one and it didn't help.
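For what it's worth, --precision full forces float32 activations while --lowvram keeps parts of the half-precision model on the CPU, which is one plausible way for a single group_norm call to see mixed dtypes. A webui-user.bat combination that may be worth testing (an assumption, not a confirmed fix from this thread; --no-half keeps the model weights themselves in float32):

```bat
rem webui-user.bat - hedged sketch only: run the whole model in float32 so
rem CPU-offloaded layers cannot end up with float16 parameters.
set COMMANDLINE_ARGS=--xformers --no-half --no-half-vae --lowvram
```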

List of enabled extensions

  1. sd-webui-controlnet
  2. ultimate-upscale-for-automatic1111

Console logs

*** Error completing request
*** Arguments: ('task(z9530ekf9leu7so)', '', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000026410062C20>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000263FEC735B0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000263FEC73130>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000026410060DC0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 451, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 26, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\modules\sd_hijack_unet.py", line 48, in apply_model
        return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 858, in forward_webui
        raise e
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 855, in forward_webui
        return forward(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 592, in forward
        control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
        return self.control_model(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 314, in forward
        h = module(h, emb, context)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 98, in forward
        x = layer(x, emb)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 317, in forward
        return checkpoint(
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
        return func(*inputs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 329, in _forward
        h = self.in_layers(x)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
        input = module(input)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 275, in forward
        return super().forward(x.float()).type(x.dtype)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
        return F.group_norm(
      File "C:\Users\User\Desktop\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float
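The last few frames are the telling part: sgm's GroupNorm32 upcasts its input with x.float() (util.py, line 275), but the ControlNet model's own GroupNorm weight and bias are still float16, and the CPU kernel rejects that pairing. A hedged workaround sketch, assuming you can reach the loaded ControlNet network from a script (control_model is a placeholder name, not a webui API):

```python
import torch

def upcast_groupnorm_params(control_model: torch.nn.Module) -> None:
    """Upcast only the GroupNorm parameters to float32 so they match the
    float32 input that GroupNorm32.forward() produces via x.float()."""
    for module in control_model.modules():
        if isinstance(module, torch.nn.GroupNorm):
            module.float()  # casts weight and bias to float32 in place
```

Aligning the launch flags so inputs and parameters share one dtype (for example --no-half) is the configuration-level version of the same idea.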

Additional information

No response

Wetlander commented 11 months ago

Having the same problem on my machine.

What happened?

The error occurs when using any form of ControlNet.

Steps to reproduce the problem

In SD 1.5 Automatic1111 I enable ControlNet and put in a variety of images; the preprocessor runs to generate the needed control image; then I press Generate, and after 1-8 seconds it comes back with the error.

What should have happened?

Not give an error, but give me a posed image, an upscaled image, or whatever I requested.

Commit where the problem happens

Has happened since I installed two weeks ago.

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--api --xformers --precision full --upcast-sampling --no-half-vae --medvram --always-batch-cond-uncond --opt-split-attention --opt-channelslast --enable-insecure-extension-access --disable-nan-check --theme dark

I tried removing the flags one by one and it didn't help.

List of enabled extensions

OneButtonPrompt, a1111-sd-webui-tagcomplete, adetailer, canvas-zoom, deforum-for-automatic1111-webui, model-keyword, multidiffusion-upscaler-for-automatic1111, openpose-editor, sd-dynamic-prompts, sd-extension-system-info, sd-webui-3d-open-pose-editor, sd-webui-aspect-ratio-helper, sd-webui-controlnet, sd-webui-mov2mov, sd-webui-openpose-editor, sd-webui-roop, stable-diffusion-webui-rembg, stable-diffusion-webui-ux, ultimate-upscale-for-automatic1111

System: arch: AMD64; cpu: Intel64 Family 6 Model 158 Stepping 13, GenuineIntel; system: Windows; release: Windows-10-10.0.19045-SP0; python: 3.10.6

GPU: device: NVIDIA GeForce GTX 1660 Ti (1) (compute_37) (7, 5); cuda: 11.8; cudnn: 8700; driver: 531.41

Console logs

2023-11-21 14:50:35,270 - ControlNet - INFO - Loading model: control_v11p_sd15_inpaint [ebff9138]
████████████████████████████████████| 20/20 [00:52<00:00, 2.64s/it]
2023-11-21 14:50:37,613 - ControlNet - INFO - Loaded state_dict from [C:\Apache24\htdocs\AI\stable-diffusion-webui\models\ControlNet\control_v11p_sd15_inpaint.pth]
2023-11-21 14:50:37,614 - ControlNet - INFO - controlnet_default_config
2023-11-21 14:50:40,270 - ControlNet - INFO - ControlNet model control_v11p_sd15_inpaint [ebff9138] loaded.
2023-11-21 14:50:40,383 - ControlNet - INFO - using inpaint as input
2023-11-21 14:50:40,426 - ControlNet - INFO - Loading preprocessor: inpaint_only+lama
2023-11-21 14:50:40,464 - ControlNet - INFO - preprocessor resolution = -1
Downloading: "https://huggingface.co/lllyasviel/Annotators/resolve/main/ControlNetLama.pth" to C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads\lama\ControlNetLama.pth

100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 195M/195M [00:18<00:00, 10.8MB/s]
2023-11-21 14:51:11,584 - ControlNet - INFO - ControlNet used torch.float32 VAE to encode torch.Size([1, 4, 64, 192]).
2023-11-21 14:51:11,651 - ControlNet - INFO - ControlNet Hooked - Time = 36.7220938205719
2023-11-21 14:51:13,394 - ControlNet - INFO - [ControlNet] Initial noise hack applied to torch.Size([1, 4, 64, 192]).
0%| | 0/17 [00:03<?, ?it/s]
Error completing request
Arguments: ('task(gcxvsjo7862761e)', 0, "Enchanting forest glade with (sunbeamsfiltering through trees), (natural lighting), \n\n(mossy rocks), (dirt path:1.3),\n(delicate wildflowers), (butterflies), (birds), (mushrooms),\n ,\n\n(wide shot), <lora:Bird'sEye_Drone_last:3>, from above,\n\nhidden objects games, video game concept art, (8K Unity wallpaper), fine details, award-winning image, highly detailed, 16k, cinematic perspective, ((video game environment concept art style)), pretty colors, cinematic environment,\n(handmade2d:1.0) style ", ' human, people, 1girl, 1boy, NSFW, nude, cleavage, house, cabin, hut,\n\nng_deepnegative_v1_75t, bad-artist, bad-hands-5, an11,\n(worst quality, low quality:1.4), multiple views, multiple panels, blurry, watermark, letterbox, text, character, signature, medium quality, deleted, lowres, comic, frame, watermark, signature,watermark,signature\n', [], <PIL.Image.Image image mode=RGBA size=832x512 at 0x1D4E46BE530>, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.8, 0, 512, 1536, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000001D43ECF5B70>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, {'ad_model': 'face_yolov8n.pt' continues below}
'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 960, 64, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', UiControlNetUnit(enabled=True, module='inpaint_only+lama', model='control_v11p_sd15_inpaint [ebff9138]', weight=1, image={'image': array([[[178, 194, 146], [166, 190, 144], [105, 134, 91], ..., [ 28, 31, 19], [ 25, 29, 17], [ 27, 28, 19]],


[[191, 210, 158], [181, 211, 160], [120, 152, 103], ..., [ 27, 30, 20], [ 24, 32, 18], [ 24, 30, 18]],
[[163, 183, 131], [172, 206, 147], [149, 185, 131], ..., [ 27, 31, 19], [ 27, 33, 19], [ 25, 30, 19]],
...,
[[ 9, 17, 13], [ 8, 15, 11], [ 8, 15, 10], ..., [ 3, 2, 2], [ 2, 3, 1], [ 1, 4, 1]],
[[ 8, 17, 10], [ 6, 15, 7], [ 7, 16, 10], ..., [ 1, 1, 0], [ 1, 3, 2], [ 1, 2, 0]],
[[ 5, 14, 10], [ 6, 13, 8], [ 6, 13, 10], ..., [ 2, 0, 1], [ 1, 0, 2], [ 0, 0, 0]]], dtype=uint8), 'mask': array([[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
...,
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]]], dtype=uint8)}, resize_mode='Resize and Fill', low_vram=True, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='ControlNet is more important', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), None, False, '0', 'C:\Apache24\htdocs\AI\stable-diffusion-webui\models\roop\inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, ' CFG Scale should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', 0, '', None, None, False, None, None, False, None, None, False, 50, ' CFG Scale should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\img2img.py", line 208, in img2img
    processed = process_images(p)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 420, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\processing.py", line 1528, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 26, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\modules\sd_hijack_unet.py", line 48, in apply_model
    return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 827, in forward_webui
    raise e
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 824, in forward_webui
    return forward(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 561, in forward
    control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
    return self.control_model(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 314, in forward
    h = module(h, emb, context)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 98, in forward
    x = layer(x, emb)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 317, in forward
    return checkpoint(
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
    return func(*inputs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 329, in _forward
    h = self.in_layers(x)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 275, in forward
    return super().forward(x.float()).type(x.dtype)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
    return originals.GroupNorm_forward(self, input)
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float


Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-2-g4afaaf8a
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.7.0.72 to 4.8.0
Installing requirements for Mov2mov
Installing requirements for imageio-ffmpeg
Checking roop requirements
Install insightface==0.7.3
Installing sd-webui-roop requirement: insightface==0.7.3
Install onnx==1.14.0
Installing sd-webui-roop requirement: onnx==1.14.0
Install onnxruntime==1.15.0
Installing sd-webui-roop requirement: onnxruntime==1.15.0
Install opencv-python==4.7.0.72
Installing sd-webui-roop requirement: opencv-python==4.7.0.72
Launching Web UI with arguments: --api --xformers --precision full --upcast-sampling --no-half-vae --medvram --always-batch-cond-uncond --opt-split-attention --opt-channelslast --enable-insecure-extension-access --disable-nan-check --theme dark
2023-11-21 14:58:00.325408: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
WARNING:tensorflow:From C:\Apache24\htdocs\AI\stable-diffusion-webui\venv\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

[-] ADetailer initialized. version: 23.11.1, num models: 9
2023-11-21 14:58:13,227 - ControlNet - INFO - ControlNet v1.1.418
ControlNet preprocessor location: C:\Apache24\htdocs\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-11-21 14:58:13,423 - ControlNet - INFO - ControlNet v1.1.418
2023-11-21 14:58:14,063 - roop - INFO - roop v0.0.2
2023-11-21 14:58:14,174 - roop - INFO - roop v0.0.2
Loading weights [879db523c3] from C:\Apache24\htdocs\AI\stable-diffusion-webui\models\Stable-diffusion\dreamshaper_8.safetensors
Creating model from config: C:\Apache24\htdocs\AI\stable-diffusion-webui\configs\v1-inference.yaml
Deforum ControlNet support: enabled
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 52.3s (prepare environment: 31.1s, import torch: 11.9s, import gradio: 0.7s, setup paths: 0.9s, initialize shared: 0.2s, other imports: 0.6s, setup codeformer: 0.1s, load scripts: 4.6s, create ui: 1.7s, gradio launch: 0.3s, add APIs: 0.1s).
Applying attention optimization: xformers... done.
No Image data blocks found.
No Image data blocks found.
Model loaded in 11.7s (load weights from disk: 0.8s, create model: 1.6s, apply weights to model: 0.6s, apply channels_last: 0.4s, load textual inversion embeddings: 5.6s, calculate empty prompt: 2.5s).
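Despite the different entry point (img2img rather than txt2img), this trace dies in the same frame as the one above: sgm/modules/diffusionmodules/util.py, line 275. That line belongs to sgm's GroupNorm32, whose pattern (paraphrased from the line visible in the trace, not a verbatim copy of the sgm source) looks like this:

```python
import torch
import torch.nn as nn

class GroupNorm32(nn.GroupNorm):
    # Normalize in float32 for numerical stability, then cast back to the
    # caller's dtype. If this layer's own weight/bias are still float16 (for
    # example, a half-precision model offloaded to the CPU under --medvram or
    # --lowvram), F.group_norm receives a float32 input with float16
    # parameters, and the CPU kernel raises the "mixed dtype" RuntimeError.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return super().forward(x.float()).type(x.dtype)
```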

huchenlei commented 11 months ago

I don't think you're having the same issue, based on the stack trace. Please open a new issue and I will take a further look there.