continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward) #302

Closed chlolch closed 1 year ago

chlolch commented 1 year ago

Is there an existing issue for this?

Have you read FAQ on README?

What happened?

When using img2img → batch with an OpenPose ControlNet, this bug appears. It occurs randomly; a temporary workaround is to delete some prompts, after which it works fine.

Steps to reproduce the problem

img2img → batch with an OpenPose ControlNet unit enabled.

What should have happened?

Generation should complete without this intermittent error.

Commit where the problem happens

webui: 1.6.0 extension: 1.20.0

What browsers do you use to access the UI?

No response

Command Line Arguments

No

Console logs

2023-11-11 12:27:26,894 - ControlNet - INFO - ControlNet Hooked - Time = 0.1634221076965332
  0%|          | 0/35 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(0w9isgmhnu8y2vs)', 0, '(masterpiece),(best quality:1),(ultra highres:1),wallpaper,detailed illustration,detailed beautiful skin,dewy skin,sweaty skin,(front focus),<lora:add_detail:0.8>,<lora:VeryLongLegs_v2:0.4>,(very_long_legs:0.8),(torn pantyhose:1),<lora:GoodHands-vanilla:1>,((1girl, pointy ears, solo, blonde hair, armor, arrow (projectile), crown,  elf, long hair, boots,forest:1.5,cape)),(green short sleeve:1.2)', '(badhandv4),easynegative,bad_pictures,(worst quality:2),(low quality:2),(normal quality:2),lowres,bad anatomy,bad hands,normal quality,((monochrome)),((grayscale)),((watermark)),uneven eyes,lazy eye,monochrome,zombie,nsfw,', [], <PIL.Image.Image image mode=RGBA size=1080x1920 at 0x7FB6AA8F1180>, None, None, None, None, None, None, 35, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 1, 0, 1280, 720, 1, 0, 0, 32, 0, '/root/autodl-tmp/src', '/root/autodl-tmp/diff', '/root/autodl-tmp/smallmask', ['Clip skip: 2'], False, [], '', <gradio.routes.Request object at 0x7fb6aa9cd030>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 
'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.animatediff_ui.AnimateDiffProcess object at 0x7fb6aa9ce890>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb6a011be50>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb6a011a9b0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb6aa9ce8c0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb6a1f6fbb0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb6aa9cdc60>, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', 'None', 1, 'None', False, False, 'PreviousFrame', 'src', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 
'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "/root/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/root/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/root/stable-diffusion-webui/modules/img2img.py", line 208, in img2img
        processed = process_images(p)
      File "/root/stable-diffusion-webui/modules/processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "/root/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff_cn.py", line 118, in hacked_processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "/root/stable-diffusion-webui/modules/processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "/root/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 451, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "/root/stable-diffusion-webui/modules/processing.py", line 1528, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "/root/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/root/stable-diffusion-webui/modules/sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "/root/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/root/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/root/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff_infv2v.py", line 269, in mm_cfg_forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/root/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "/root/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "/root/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/root/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "/root/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/root/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/root/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 858, in forward_webui
        raise e
      File "/root/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 855, in forward_webui
        return forward(*args, **kwargs)
      File "/root/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 592, in forward
        control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/root/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 31, in forward
        return self.control_model(*args, **kwargs)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/root/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 300, in forward
        guided_hint = self.input_hint_block(hint, emb, context)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/root/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/openaimodel.py", line 102, in forward
        x = layer(x)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/root/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 444, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "/root/.sdvenv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)

Additional information

No response
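The traceback above fails inside `F.conv2d` because the input tensor and the layer's `weight` live on different devices (the ControlNet hint on `cuda:0`, the conv weight on `cpu`). A simplified pure-Python sketch of the same-device check PyTorch performs (illustrative only, not PyTorch's actual implementation):

```python
# Simplified illustration (NOT PyTorch's actual code) of the same-device
# check that raises the RuntimeError shown in the traceback above.

def check_same_device(tensors):
    """Raise RuntimeError if the named tensors do not all share one device."""
    unique = set(tensors.values())
    if len(unique) > 1:
        a, b = sorted(unique)
        raise RuntimeError(
            f"Expected all tensors to be on the same device, "
            f"but found at least two devices, {a} and {b}!")

# The failing call had the input on cuda:0 but the conv weight on cpu:
try:
    check_same_device({"input": "cuda:0", "weight": "cpu"})
except RuntimeError as e:
    print(e)  # prints the "at least two devices, cpu and cuda:0" message
```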

continue-revolution commented 1 year ago

Please give me your prompts. Better yet, give me a screenshot of your WebUI and all of your command line arguments so that I can replicate your error.

continue-revolution commented 1 year ago

I am not able to reproduce your error. When you are able to replicate your own error, please follow the instructions above so that I can reproduce it.

chlolch commented 1 year ago

bug.zip — all the details are in bug.zip.

continue-revolution commented 1 year ago

Thanks. I will try to replicate your error.

continue-revolution commented 1 year ago

I need screenshots of your UI — meaning the img2img UI, especially the WebUI settings (e.g. width, height), ControlNet, AnimateDiff, and how many control images you have in total.

continue-revolution commented 1 year ago

Alright, I understand. See https://github.com/continue-revolution/sd-webui-animatediff#how-to-use, item 3.

chlolch commented 1 year ago

UI截图.zip (UI screenshots) — it happens regardless of how many images I use; I used 4 images only to save testing time. The bug.zip above contains my screen recording and the 4 control images.

continue-revolution commented 1 year ago

If it stops at that point in the traceback, it means you have not checked "pad cond/uncond". See the readme: https://github.com/continue-revolution/sd-webui-animatediff#how-to-use, item 3.
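The "pad cond/uncond" option pads the positive and negative prompt conditioning to one common length so the batched tensors share a single shape. A conceptual pure-Python sketch of that padding (hypothetical names, not the extension's actual code):

```python
# Conceptual sketch of "pad cond/uncond" (hypothetical names, NOT the
# extension's actual code): pad the positive and negative prompt token
# sequences to one common length so batched tensors share a shape.

def pad_cond_uncond(cond, uncond, pad_token=0):
    """Pad the shorter token list so both lists have the same length."""
    target = max(len(cond), len(uncond))
    pad = lambda seq: seq + [pad_token] * (target - len(seq))
    return pad(cond), pad(uncond)

cond, uncond = pad_cond_uncond([101, 7592, 2088, 102], [101, 102])
assert len(cond) == len(uncond) == 4
```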

chlolch commented 1 year ago

Thank you, it works properly now. I hadn't noticed that option before.