guoyww / AnimateDiff

Official implementation of AnimateDiff.
https://animatediff.github.io
Apache License 2.0

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor #234

Closed: zhulinyv closed this issue 7 months ago

zhulinyv commented 7 months ago

I got this error and I do not know how to solve it. Here is the full log:

SD-WebUI Launcher Diagnostic File

Date: 2023-12-06 23:15:25
Launcher Version: 2.6.14.216
Data File Version: 2023-11-25 13:16
SD-WebUI Version: 40ac134c553ac824d4a96666bba14d550300daa5 (2023-11-25 12:35:09)
Working Directory: E:\AIdraw\stable-diffusion-webui
------------------------
System Information: 
OS: Microsoft Windows NT 10.0.22631.0
CPU: 12 cores
Memory Size: 16384 MB
Page File Size: 22987 MB

NVIDIA Management Library:
  NVIDIA Driver Version: 546.17
  NVIDIA Management Library Version: 12.546.17

CUDA Driver:
  Version: 12030
  Devices: 
    00000000:01:00.0 0: NVIDIA GeForce GTX 1650 Ti [75] 4 GB

NvApi:
  Version: 54617 r545_96

DirectML Driver: 
  Devices: 
    8085 0: NVIDIA GeForce GTX 1650 Ti 3 GB

Intel Level Zero Driver:
  Not Available
Log: 
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.6.0-378-g40ac134c
Commit hash: 40ac134c553ac824d4a96666bba14d550300daa5
[Auto-Photoshop-SD] Attempting auto-update...
[Auto-Photoshop-SD] switch branch to extension branch.
checkout_result: Your branch is up to date with 'origin/master'.

[Auto-Photoshop-SD] Current Branch.
branch_result: * master

[Auto-Photoshop-SD] Fetch upstream.
fetch_result: 
[Auto-Photoshop-SD] Pull upstream.
pull_result: Already up to date.
Installing requirements for diffusers
ReActor preheating... Device: CUDA
loading Smart Crop reqs from E:\AIdraw\stable-diffusion-webui\extensions\sd_smartprocess\requirements.txt
Checking Smart Crop requirements.
Launching Web UI with arguments: --medvram --theme dark --xformers --precision full --no-half --no-half-vae --api --autolaunch --enable-insecure-extension-access --allow-code
2023-12-06 23:09:59.166513: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

python_server_full_path:  E:\AIdraw\stable-diffusion-webui\extensions\Auto-Photoshop-StableDiffusion-Plugin\server/python_server
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 23.11.1, num models: 14
2023-12-06 23:10:14,050 - ControlNet - INFO - ControlNet v1.1.419
ControlNet preprocessor location: E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-12-06 23:10:14,251 - ControlNet - INFO - ControlNet v1.1.419
sd-webui-prompt-all-in-one background API service started successfully.
23:10:15 - ReActor - STATUS - Running v0.6.0-a1
Loading weights [31cc8c676e] from E:\AIdraw\stable-diffusion-webui\models\Stable-diffusion\style_model\PVC-AO.safetensors
2023-12-06 23:10:16,903 - AnimateDiff - INFO - Injecting LCM to UI.
2023-12-06 23:10:17,409 - AnimateDiff - INFO - Hacking i2i-batch.
Creating model from config: E:\AIdraw\stable-diffusion-webui\configs\v1-inference.yaml
2023-12-06 23:10:18,352 - modelscope - INFO - PyTorch version 2.1.0+cu121 Found.
2023-12-06 23:10:18,357 - modelscope - INFO - TensorFlow version 2.15.0 Found.
2023-12-06 23:10:18,357 - modelscope - INFO - Loading ast index from E:\AIdraw\stable-diffusion-webui\.cache\modelscope\hub\ast_indexer
2023-12-06 23:10:18,676 - modelscope - INFO - Loading done! Current index file version is 1.9.5, with md5 398524fe8333dd6d8afbf619ceff43e0 and a total number of 945 components indexed
[['E:\\AIdraw\\stable-diffusion-webui\\extensions\\facechain/resources/inpaint_template\\1.jpg'], ['E:\\AIdraw\\stable-diffusion-webui\\extensions\\facechain/resources/inpaint_template\\2.jpg'], ['E:\\AIdraw\\stable-diffusion-webui\\extensions\\facechain/resources/inpaint_template\\3.jpg'], ['E:\\AIdraw\\stable-diffusion-webui\\extensions\\facechain/resources/inpaint_template\\4.jpg'], ['E:\\AIdraw\\stable-diffusion-webui\\extensions\\facechain/resources/inpaint_template\\5.jpg']]
[]
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
[sd-webui-comfyui] Started callback listeners for process webui
[ComfyUI] [sd-webui-comfyui] Setting up IPC...
[ComfyUI] [sd-webui-comfyui] Using inter-process communication strategy: Shared memory
[ComfyUI] [sd-webui-comfyui] Started callback listeners for process comfyui
[ComfyUI] [sd-webui-comfyui] Patching ComfyUI...
Startup time: 83.8s (prepare environment: 47.0s, import torch: 5.7s, import gradio: 2.0s, setup paths: 12.5s, initialize shared: 0.5s, other imports: 0.8s, setup codeformer: 0.2s, list SD models: 0.2s, load scripts: 4.6s, create ui: 4.6s, gradio launch: 2.6s, add APIs: 0.1s, app_started_callback: 3.0s).
[ComfyUI] Total VRAM 4096 MB, total RAM 16292 MB
[ComfyUI] Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
[ComfyUI] xformers version: 0.0.22.post7
2023-12-06 23:10:30.069288: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Loading VAE weights specified in settings: E:\AIdraw\stable-diffusion-webui\models\VAE\ClearVAE-Variant.safetensors
Applying attention optimization: xformers... done.
Model loaded in 19.4s (load weights from disk: 1.2s, create model: 0.4s, apply weights to model: 13.1s, apply float(): 1.1s, load VAE: 1.2s, load textual inversion embeddings: 1.2s, calculate empty prompt: 0.9s).
WARNING:tensorflow:From E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

[ComfyUI] Set vram state to: LOW_VRAM
[ComfyUI] Device: cuda:0 NVIDIA GeForce GTX 1650 Ti : native
[ComfyUI] VAE dtype: torch.float32
[ComfyUI] Using xformers cross attention
[ComfyUI] [sd-webui-comfyui] Launching ComfyUI with arguments: --disable-smart-memory --listen 127.0.0.1 --port 8189
[ComfyUI] ** ComfyUI start up time: 2023-12-06 23:10:39.643326
[ComfyUI] Adding extra search path checkpoints ../stable-diffusion-webui\models/Stable-diffusion
[ComfyUI] Adding extra search path configs ../stable-diffusion-webui\models/Stable-diffusion
[ComfyUI] Adding extra search path vae ../stable-diffusion-webui\models/VAE
[ComfyUI] Adding extra search path loras ../stable-diffusion-webui\models/Lora
[ComfyUI] Adding extra search path loras ../stable-diffusion-webui\models/LyCORIS
[ComfyUI] Adding extra search path upscale_models ../stable-diffusion-webui\models/ESRGAN
[ComfyUI] Adding extra search path upscale_models ../stable-diffusion-webui\models/RealESRGAN
[ComfyUI] Adding extra search path upscale_models ../stable-diffusion-webui\models/SwinIR
[ComfyUI] Adding extra search path embeddings ../stable-diffusion-webui\embeddings
[ComfyUI] Adding extra search path hypernetworks ../stable-diffusion-webui\models/hypernetworks
[ComfyUI] Adding extra search path controlnet ../stable-diffusion-webui\models/ControlNet
[ComfyUI] Adding extra search path controlnet ../stable-diffusion-webui\extensions/sd-webui-controlnet/models
[ComfyUI] Adding extra search path animediff ../stable-diffusion-webui\extensions/sd-webui-animatediff/model
[AnimateDiffEvo] - ERROR - No motion models found. Please download one and place in: ['E:\\AIdraw\\ComfyUI\\custom_nodes\\ComfyUI-AnimateDiff-Evolved\\models']
[ComfyUI] ### Loading: ComfyUI-Impact-Pack (V4.38.2)
[ComfyUI] ### Loading: ComfyUI-Impact-Pack (Subpack: V0.3.2)
[ComfyUI] ### Loading: ComfyUI-Manager (V1.6.4)
[ComfyUI] ### ComfyUI Revision: 1787 [e1345473] | Released on '2023-12-06'
[ComfyUI] FizzleDorf Custom Nodes: Loaded
2023-12-06 23:10:42,318 - roop - INFO - roop v0.0.2
[ComfyUI] Total VRAM 4096 MB, total RAM 16292 MB
[ComfyUI] Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
[ComfyUI] xformers version: 0.0.22.post7
[ComfyUI] Set vram state to: LOW_VRAM
[ComfyUI] Device: cuda:0 NVIDIA GeForce GTX 1650 Ti : native
[ComfyUI] VAE dtype: torch.float32
[ComfyUI] WAS Node Suite: OpenCV Python FFMPEG support is enabled
[ComfyUI] WAS Node Suite: `ffmpeg_bin_path` is set to: D:\Software\ffmpeg-5.0.1-full_build\bin
[ComfyUI] WAS Node Suite: Finished. Loaded 187 nodes successfully.
[ComfyUI] 
    "Success is not just about making money. It's about making a difference." - Unknown

[ComfyUI] 
Import times for custom nodes:
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\NK_4way Image Switch_v2.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\Nk_4way Latent Switch.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\brightness_contrast_ally.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\Pseudo_HDR_ally.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\sharpness_ally.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\gaussian_blur_ally.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\brightness_contrast_ally..py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\Nk_vaeswitch.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\imageflip_ally.py
[ComfyUI]    0.0 seconds: E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-comfyui\comfyui_custom_nodes\webui_save_image.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\bsz-auto-hires.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\histogram_equalization.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\SDXLAspectRatio.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\Nk_imgInputSwitch3Way.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\saturation_ally.py
[ComfyUI]    0.0 seconds: E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-comfyui\comfyui_custom_nodes\webui_io.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\bsz-principled-sdxl.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\sdxl_prompt_styler
[ComfyUI]    0.0 seconds: E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-comfyui\comfyui_custom_nodes\webui_proxy_nodes.py
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\ComfyUI_FizzNodes
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
[ComfyUI]    0.0 seconds: E:\AIdraw\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
[ComfyUI]    0.2 seconds: E:\AIdraw\ComfyUI\custom_nodes\ComfyUI_roop
[ComfyUI]    0.6 seconds: E:\AIdraw\ComfyUI\custom_nodes\ComfyUI-Manager
[ComfyUI]    0.9 seconds: E:\AIdraw\ComfyUI\custom_nodes\comfy_controlnet_preprocessors
[ComfyUI]    1.5 seconds: E:\AIdraw\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
[ComfyUI]    2.6 seconds: E:\AIdraw\ComfyUI\custom_nodes\was-node-suite-comfyui
[ComfyUI]
[ComfyUI] Starting server

[ComfyUI] To see the GUI go to: http://127.0.0.1:8188
ERROR:asyncio:Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
[ComfyUI] Traceback (most recent call last):
[ComfyUI]   File "C:\Users\zhuli\AppData\Local\Programs\Python\Python310\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
[ComfyUI]   File "C:\Users\zhuli\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 165, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
[ComfyUI] ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
Reusing loaded model style_model\PVC-AO.safetensors [31cc8c676e] to load xytpz_Z\XYTPZZZZZZZ.safetensors [13f2d6e619]
Loading weights [13f2d6e619] from E:\AIdraw\stable-diffusion-webui\models\Stable-diffusion\xytpz_Z\XYTPZZZZZZZ.safetensors
Loading VAE weights specified in settings: E:\AIdraw\stable-diffusion-webui\models\VAE\ClearVAE-Variant.safetensors
Applying attention optimization: xformers... done.
Weights loaded in 9.1s (send model to cpu: 0.3s, load weights from disk: 0.4s, apply weights to model: 7.4s, load VAE: 1.1s).
2023-12-06 23:11:50,828 - AnimateDiff - INFO - Moving motion module to CPU
2023-12-06 23:11:52,385 - AnimateDiff - INFO - Moving motion module to CPU
2023-12-06 23:12:55,262 - AnimateDiff - INFO - AnimateDiff process start.
2023-12-06 23:12:55,264 - AnimateDiff - INFO - Loading motion module animatediffMotion_v15V2.ckpt from E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-animatediff\model\animatediffMotion_v15V2.ckpt
2023-12-06 23:12:57,290 - AnimateDiff - INFO - Guessed animatediffMotion_v15V2.ckpt architecture: MotionModuleType.AnimateDiffV2
2023-12-06 23:13:00,971 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-12-06 23:13:01,437 - AnimateDiff - INFO - Injecting motion module animatediffMotion_v15V2.ckpt into SD1.5 UNet middle block.
2023-12-06 23:13:01,437 - AnimateDiff - INFO - Injecting motion module animatediffMotion_v15V2.ckpt into SD1.5 UNet input blocks.
2023-12-06 23:13:01,437 - AnimateDiff - INFO - Injecting motion module animatediffMotion_v15V2.ckpt into SD1.5 UNet output blocks.
2023-12-06 23:13:01,438 - AnimateDiff - INFO - Setting DDIM alpha.
2023-12-06 23:13:01,462 - AnimateDiff - INFO - Injection finished.
2023-12-06 23:13:01,463 - AnimateDiff - INFO - Hacking LoRA module to support motion LoRA
2023-12-06 23:13:01,463 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
2023-12-06 23:13:01,463 - AnimateDiff - INFO - Hacking ControlNet.
2023-12-06 23:13:02,594 - ControlNet - INFO - Loading model: control_v11f1p_sd15_depth [cfd03158]
2023-12-06 23:13:04,038 - ControlNet - INFO - Loaded state_dict from [E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\models\control_v11f1p_sd15_depth.pth]
2023-12-06 23:13:04,038 - ControlNet - INFO - controlnet_default_config
2023-12-06 23:13:06,512 - ControlNet - INFO - ControlNet model control_v11f1p_sd15_depth [cfd03158] loaded.
2023-12-06 23:13:06,997 - ControlNet - INFO - Loading preprocessor: depth
2023-12-06 23:13:06,998 - ControlNet - INFO - preprocessor resolution = 512
2023-12-06 23:13:26,166 - ControlNet - INFO - Loading model: control_v11p_sd15_openpose [cab727d4]
2023-12-06 23:13:27,850 - ControlNet - INFO - Loaded state_dict from [E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\models\control_v11p_sd15_openpose.pth]
2023-12-06 23:13:27,851 - ControlNet - INFO - controlnet_default_config
2023-12-06 23:13:34,165 - ControlNet - INFO - ControlNet model control_v11p_sd15_openpose [cab727d4] loaded.
2023-12-06 23:13:34,632 - ControlNet - INFO - Loading preprocessor: openpose_full
2023-12-06 23:13:34,633 - ControlNet - INFO - preprocessor resolution = 512
2023-12-06 23:14:40,228 - ControlNet - INFO - ControlNet Hooked - Time = 97.92496037483215
*** Error completing request
*** Arguments: ('task(s66kq46cx992ig6)', '1girl, pom pom hair ornament, solo, hair ornament, virtual youtuber, green eyes, pom pom (clothes), aged down, blonde hair, long sleeves, smile, socks, bandaid on knee, bangs, simple background, full body, v-shaped eyebrows, black background, white socks, child, shoes, randoseru, shirt, bandaid on leg, dress, kneehighs, looking at viewer, white shirt, short hair, bandaid, twintails, backpack, belt, female child, standing, blunt bangs, bag, pinafore dress, >:), skirt', '(verybadimagenegative_v1.3,NG_DeepNegative_V1_75T:1.4), black legwear, black pantyhose, black legwear, black pantyhose, black legwear, black pantyhose, black legwear, black pantyhose, black legwear, black pantyhose, black legwear, black pantyhose, black legwear, black pantyhose, black legwear, black pantyhose, black legwear, black pantyhose', [], 20, 'DPM++ 2M Karras', 1, 1, 6.5, 512, 512, True, 0.5, 1.5, 'lollypop', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001F183E65060>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, 
'', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 960, 64, True, True, True, False, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001F183E5EE90>, True, {'postprocess_txt2img': False, 'postprocess_latent_txt2img': False}, UiControlNetUnit(enabled=True, module='depth_midas', model='control_v11f1p_sd15_depth [cfd03158]', weight=0.75, image=None, resize_mode='Crop and Resize', low_vram=True, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=True, module='openpose_full', model='control_v11p_sd15_openpose [cab727d4]', weight=0.75, image=None, resize_mode='Crop and Resize', low_vram=True, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\n\nBACKGROUND:1,1,1,1,1,1,1,1,0.2,0,0,0.8,1,1,1,0,0\nEARS:1,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, '从 modules.processing import process_images\n\np.宽度 = 768\np.高度 = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, 
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\n\nBACKGROUND:1,1,1,1,1,1,1,1,0.2,0,0,0.8,1,1,1,0,0\nEARS:1,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False) {}
    Traceback (most recent call last):
      File "E:\AIdraw\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\AIdraw\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "E:\AIdraw\stable-diffusion-webui\modules\processing.py", line 734, in process_images
        res = process_images_inner(p)
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 119, in hacked_processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\modules\processing.py", line 869, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 420, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-comfyui\lib_comfyui\webui\patches.py", line 104, in p_sample_patch
        x = original_function(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\modules\processing.py", line 1145, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "E:\AIdraw\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\AIdraw\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "E:\AIdraw\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 269, in mm_cfg_forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\AIdraw\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
        result = forward_call(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 827, in forward_webui
        raise e
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 824, in forward_webui
        return forward(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 561, in forward
        control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
        return self.control_model(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 300, in forward
        guided_hint = self.input_hint_block(hint, emb, context)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 102, in forward
        x = layer(x)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIdraw\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 508, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "E:\AIdraw\stable-diffusion-webui\extensions\stable-diffusion-webui-composable-lora\composable_lora.py", line 154, in lora_Conv2d_forward
        return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "E:\AIdraw\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

---
zhulinyv commented 7 months ago

Maybe it is caused by another extension.

zhulinyv commented 7 months ago

I found the same issue: https://github.com/Mikubill/sd-webui-controlnet/issues/2204

patientx commented 7 months ago

Interestingly, I had this same problem months ago with the ReActor face-swapping extension on ComfyUI. It happened after CodeFormer was loaded for face swapping, and the solution back then was to replace the CodeFormer model hosted by the extension dev with one I downloaded from another source. That worked.

I had the same problem again a few days ago, and that time just changing the scheduler on the KSampler solved(!?) the issue. (I had no more CodeFormer files left to swap out :) )

Just now I am having this same problem once more. Up until now I had been experimenting with prompts, and ReActor (with CodeFormer) was working very well; now it does not. ComfyUI didn't update, and ReActor certainly didn't, since it was last updated about 15 days ago. I am also using the exact same workflow, and I even switched to the most basic one with just ReActor added, for a 512x512 render.

Up until now I thought this was because my GPU is AMD and AI support there is poor, but now that I see an NVIDIA owner having the same problem, I am out of ideas. I honestly don't know who to ask.