lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

[Bug]: RuntimeError: Expected all tensors to be on the same device, cpu and cuda:0 #56

Closed · Remy33f closed this 8 months ago

Remy33f commented 8 months ago

What happened?

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
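
For context, this error is raised whenever an index/lookup op receives tensors on different devices; a minimal illustration of the failure mode (illustration only, not Forge code):

```python
# Minimal repro of the device-mismatch error (illustration only, not Forge code).
import torch

emb = torch.nn.Embedding(49408, 768).cuda()   # embedding weights on cuda:0
tokens = torch.tensor([[49406, 320, 49407]])  # token ids still on the cpu

try:
    emb(tokens)  # RuntimeError: Expected all tensors to be on the same device...
except RuntimeError as e:
    print(e)

out = emb(tokens.to(emb.weight.device))  # moving the ids fixes the lookup
```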

Steps to reproduce the problem

Installed Forge WebUI; when generating the first image, the error message appears.

What should have happened?

It should choose the correct GPU and generate the image.

What browsers do you use to access the UI?

Microsoft Edge

Sysinfo

sysinfo-2024-02-06-11-11.json

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.7-latest-41-g6aee7a20
Commit hash: 6aee7a20329b4a0e10b87d841d680562bdde65c7
Launching Web UI with arguments:
Total VRAM 6144 MB, total RAM 32677 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 SUPER : native
VAE dtype: torch.float32
Using pytorch cross attention
ControlNet preprocessor location: E:\Forge\webui_forge_cu121_torch21\webui\models\ControlNetPreprocessor
Loading weights [2d5af23726] from E:\Forge\webui_forge_cu121_torch21\webui\models\Stable-diffusion\realismEngineSDXL_v30VAE.safetensors
2024-02-06 14:59:02,793 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 42.2s (prepare environment: 11.1s, import torch: 13.3s, import gradio: 4.5s, setup paths: 4.7s, initialize shared: 0.5s, other imports: 3.2s, load scripts: 2.9s, create ui: 0.6s, gradio launch: 1.7s).
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Couldn't find VAE named None; using None instead
To load target model SDXLClipModel
Begin to load 1 model
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "threading.py", line 973, in _bootstrap
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\initialize.py", line 162, in load_model
    shared.sd_model  # noqa: B018
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\shared_items.py", line 133, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_models.py", line 509, in get_sd_model
    load_model()
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_models.py", line 614, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_models.py", line 536, in get_empty_cond
    d = sd_model.get_learned_conditioning([""])
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_models_xl.py", line 36, in get_learned_conditioning
    c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
    emb_out = embedder(batch[embedder.input_key])
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_hijack_clip.py", line 273, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules_forge\forge_clip.py", line 50, in encode_with_transformers
    outputs = self.wrapped.transformer(tokens, output_hidden_states=self.wrapped.layer == "hidden")
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 227, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_hijack.py", line 177, in forward
    inputs_embeds = self.wrapped(input_ids)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

Stable diffusion model failed to load
Loading weights [2d5af23726] from E:\Forge\webui_forge_cu121_torch21\webui\models\Stable-diffusion\realismEngineSDXL_v30VAE.safetensors
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Couldn't find VAE named None; using None instead
To load target model SDXLClipModel
Begin to load 1 model
Token merging is under construction now and the setting will not take effect.
*** Error completing request
*** Arguments: ('task(v1jhe6y27xuw8dt)', <gradio.routes.Request object at 0x000002357B55D330>, 'amateur cellphone photography  cute woman with blonde hair at mardi gras, sunset,  (freckles:0.2) . f8.0, samsung galaxy, noise, jpeg artefacts, poor lighting,  low light, underexposed, high contrast', '(watermark:1.2), (text:1.2), (logo:1.2), (3d render:1.2), drawing, painting, crayon', [], 25, 'DPM++ 2M Karras', 1, 1, 4, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_input_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_input_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_input_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\processing.py", line 736, in process_images
        sd_models.reload_model_weights()
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_models.py", line 628, in reload_model_weights
        return load_model(info)
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_models.py", line 614, in load_model
        sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_models.py", line 536, in get_empty_cond
        d = sd_model.get_learned_conditioning([""])
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_models_xl.py", line 36, in get_learned_conditioning
        c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
        emb_out = embedder(batch[embedder.input_key])
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules_forge\forge_clip.py", line 50, in encode_with_transformers
        outputs = self.wrapped.transformer(tokens, output_hidden_states=self.wrapped.layer == "hidden")
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
        hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\models\clip\modeling_clip.py", line 227, in forward
        inputs_embeds = self.token_embedding(input_ids)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\webui\modules\sd_hijack.py", line 177, in forward
        inputs_embeds = self.wrapped(input_ids)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
        return F.embedding(
      File "E:\Forge\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
        return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

---

Additional information

1660 Super / 32 GB RAM / Ryzen 7

adnanT11 commented 8 months ago

Had the same issue. Adding `--always-gpu` seems to fix it for me.

lllyasviel commented 8 months ago

Please do not use `--always-gpu`. I attempted a fix at https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/9c31b0ddcba42afcbda310b46750decd33b6ea2e. Please try again and see if it is working.
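
(For reference, a fix for this class of error typically moves the token ids onto the text encoder's device before the forward call. The following is a hypothetical sketch of that pattern based on the traceback above, not the contents of the actual commit:)

```python
# Hypothetical sketch of the fix pattern (not the actual commit): align the
# token ids with the CLIP transformer's device in encode_with_transformers.
def encode_with_transformers(self, tokens):
    device = next(self.wrapped.transformer.parameters()).device
    tokens = tokens.to(device)  # avoids the cpu vs. cuda:0 mismatch above
    return self.wrapped.transformer(
        tokens, output_hidden_states=self.wrapped.layer == "hidden"
    )
```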

adnanT11 commented 8 months ago

Can confirm it's working now. Thanks!

lllyasviel commented 8 months ago

@adnanT11 do not close too soon. I added another, better fix; please test again.

Thanks in advance!

adnanT11 commented 8 months ago

Tested again after updating and it's giving the same error as before: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

lllyasviel commented 8 months ago

Thanks. I updated and went back to the old fix. Please update and close this issue.

Remy33f commented 8 months ago

> @adnanT11 do not close too soon. I added another, better fix; please test again.
>
> Thanks in advance!

I used update.bat and restarted, but the issue remains. Do I need to download the code one more time? Thanks.

lllyasviel commented 8 months ago

Update again, please, because I used the old fix in the last commit.

lllyasviel commented 8 months ago

Please close this issue if fixed

Remy33f commented 8 months ago

> Please close this issue if fixed

Thanks! It works now.

Hansynily commented 8 months ago

It's fixed for me, but with animatediff (https://github.com/continue-revolution/sd-forge-animatediff) enabled, the problem still persists:

Traceback (most recent call last):
  File "C:\AI\A1111\webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\AI\A1111\webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\AI\A1111\webui\modules\txt2img.py", line 110, in txt2img
    processed = processing.process_images(p)
  File "C:\AI\A1111\webui\modules\processing.py", line 749, in process_images
    res = process_images_inner(p)
  File "C:\AI\A1111\webui\modules\processing.py", line 920, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\AI\A1111\webui\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\AI\A1111\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\AI\A1111\webui\modules\sd_samplers_common.py", line 260, in launch_sampling
    return func()
  File "C:\AI\A1111\webui\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\AI\A1111\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\A1111\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\A1111\webui\modules\sd_samplers_cfg_denoiser.py", line 179, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "C:\AI\A1111\webui\modules_forge\forge_sampler.py", line 82, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "C:\AI\A1111\webui\ldm_patched\modules\samplers.py", line 282, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "C:\AI\A1111\webui\ldm_patched\modules\samplers.py", line 251, in calc_cond_uncond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
  File "C:\AI\A1111\webui\extensions\sd-forge-animatediff\scripts\animatediff_infv2v.py", line 132, in mm_sd_forward        out = apply_model(
  File "C:\AI\A1111\webui\ldm_patched\modules\model_base.py", line 85, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\A1111\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 860, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "C:\AI\A1111\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 57, in forward_timestep_embed
    x = modifier(x, 'after', layer, layer_index, ts, transformer_options)
  File "C:\AI\A1111\webui\extensions\sd-forge-animatediff\scripts\animatediff_mm.py", line 82, in mm_block_modifier
    return self.mm.down_blocks[mm_idx0].motion_modules[mm_idx1](x)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\A1111\webui\extensions\sd-forge-animatediff\motion_module.py", line 127, in forward
    return self.temporal_transformer(x)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\A1111\webui\extensions\sd-forge-animatediff\motion_module.py", line 185, in forward
    hidden_states = block(hidden_states)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\A1111\webui\extensions\sd-forge-animatediff\motion_module.py", line 239, in forward
    hidden_states = attention_block(norm_hidden_states) + hidden_states
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\A1111\webui\extensions\sd-forge-animatediff\motion_module.py", line 329, in forward
    x = self.pos_encoder(x)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\A1111\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\A1111\webui\extensions\sd-forge-animatediff\motion_module.py", line 264, in forward
    x = x + self.pe[:, :x.size(1)]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
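
The failing line is `x = x + self.pe[:, :x.size(1)]`, i.e. the motion module's positional-encoding tensor is still on the CPU while the activations are on cuda:0. A hypothetical sketch of the usual remedy (register `pe` as a buffer so it follows the module's device, or move it to the input's device), not the extension's actual code:

```python
import math

import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    """Hypothetical sketch: a registered buffer follows module.to(device); a plain tensor does not."""

    def __init__(self, d_model: int, max_len: int = 32):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(1, max_len, d_model)
        pe[0, :, 0::2] = torch.sin(position * div_term)
        pe[0, :, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)  # moves along with the module's device

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Defensive alternative: self.pe.to(x.device); avoids cuda:0 vs. cpu.
        return x + self.pe[:, : x.size(1)]
```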

wensleyoliv commented 8 months ago

DPM++ 2M Karras is working fine, but when I try to use Euler a I get this same error:

*** Error completing request                                                                    | 0/30 [00:00<?, ?it/s]
*** Arguments: ('task(n6q4gm5cjrftp90)', <gradio.routes.Request object at 0x000001D76C8617E0>, '1girl,', 'lowres, bad hands, missing fingers, duplicate, bad anatomy, fused fingers, bad quality, worst quality, extra fingers, clone, cloned face, monochrome, ', [], 30, 'Euler a', 1, 1, 8.5, 1216, 832, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_input_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_input_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_input_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), False, 1.3, 1.4, 0.9, 0.2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\webui_forge\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\webui_forge\webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\webui_forge\webui\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "D:\webui_forge\webui\modules\processing.py", line 749, in process_images
        res = process_images_inner(p)
      File "D:\webui_forge\webui\modules\processing.py", line 920, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\webui_forge\webui\modules\processing.py", line 1275, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\webui_forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\webui_forge\webui\modules\sd_samplers_common.py", line 260, in launch_sampling
        return func()
      File "D:\webui_forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\webui_forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\webui_forge\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 149, in sample_euler_ancestral
        d = to_d(x, sigmas[i], denoised)
      File "D:\webui_forge\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 48, in to_d
        return (x - denoised) / utils.append_dims(sigma, x.ndim)
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
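
Here the mismatch surfaces in k-diffusion's `to_d`, where `sigma` (or `denoised`) ends up on a different device than `x`. A hypothetical, defensive sketch of device alignment at that point (not the actual Forge fix):

```python
import torch


def append_dims(t: torch.Tensor, target_dims: int) -> torch.Tensor:
    """Right-pad with size-1 dims, as k_diffusion.utils.append_dims does."""
    return t[(...,) + (None,) * (target_dims - t.ndim)]


def to_d(x: torch.Tensor, sigma: torch.Tensor, denoised: torch.Tensor) -> torch.Tensor:
    """Convert a denoiser output to a Karras ODE derivative, guarding devices."""
    sigma = sigma.to(x.device)        # guard: the sigma schedule may live on cpu
    denoised = denoised.to(x.device)  # guard: the model output may come back on cpu
    return (x - denoised) / append_dims(sigma, x.ndim)
```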

wensleyoliv commented 8 months ago

> DPM++ 2M Karras is working fine, but when I try to use Euler a I get this same error: [...]

I updated and it's fixed.