continue-revolution / sd-forge-animatediff

AnimateDiff for Stable Diffusion WebUI Forge, a mirror of https://github.com/continue-revolution/sd-webui-animatediff/tree/forge/master

[Bug]: Doesn't work with Controlnet #12

Open DA-Charlie opened 6 months ago

DA-Charlie commented 6 months ago

Is there an existing issue for this?

Have you read FAQ on README?

What happened?

I got an error; I think it's related to the path that AnimateDiff passes to ControlNet.
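
From the console log below, the mechanism looks fairly clear: ControlNet receives frame paths that already contain the batch directory ("Try to read image: GIF\openpose_result-2f9ba7c4\GIF\openpose_result-2f9ba7c4\000000001.jpg"), joins the directory on a second time, and cv2.imread silently returns None, so the slice at controlnet.py line 194 raises the TypeError. A minimal standalone sketch of that failure mode (illustrative names, not the actual extension code):

import os
import cv2
import numpy as np

batch_dir = r"GIF\openpose_result-2f9ba7c4"        # directory AnimateDiff extracted frames into
frame = os.path.join(batch_dir, "000000001.jpg")   # entry already carries the directory prefix

img_path = os.path.join(batch_dir, frame)          # joining again doubles the prefix, as in the log
img = cv2.imread(img_path)                         # cv2.imread never raises; it returns None on failure
assert img is None
# controlnet.py then evaluates img[:, :, ::-1], and slicing None raises
# "TypeError: 'NoneType' object is not subscriptable", the first error below.

img = cv2.imread(frame)                            # reading the already-prefixed path as-is would work
if img is None:
    print(f"cv2 could not read {frame}")           # fail loudly instead of crashing later
else:
    img = np.ascontiguousarray(img[:, :, ::-1]).copy()  # BGR -> RGB, mirroring get_input_data

The later KeyError: 0 in process_before_every_sampling looks like a downstream effect of the same failure: get_input_data never succeeds, so no params get registered for the ControlNet unit.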

Steps to reproduce the problem

Activate AnimateDiff
Activate ControlNet
Launch the generation

What should have happened?

The v2v generation should have proceeded.

Commit where the problem happens

webui: https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7 extension: b20f7519

What browsers do you use to access the UI ?

No response

Command Line Arguments

--always-normal-vram --api --xformers --pin-shared-memory --cuda-malloc --cuda-stream --ckpt-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Stable-diffusion --hypernetwork-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/hypernetworks --embeddings-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/embeddings --lora-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Lora

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --always-normal-vram --api --xformers --pin-shared-memory --cuda-malloc --cuda-stream --ckpt-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Stable-diffusion --hypernetwork-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/hypernetworks --embeddings-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/embeddings --lora-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Lora
Using cudaMallocAsync backend.
Total VRAM 4096 MB, total RAM 40628 MB
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 3050 Ti Laptop GPU : cudaMallocAsync
VAE dtype: torch.bfloat16
CUDA Stream Activated:  True
Using xformers cross attention
ControlNet preprocessor location: C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\models\ControlNetPreprocessor
Civitai Helper: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.3.0, num models: 17
2024-03-25 17:35:45,273 - AnimateDiff - INFO - AnimateDiff Hooking i2i_batch
Loading weights [879db523c3] from C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\models\Stable-diffusion\dreamshaper_8.safetensors
model_type EPS
UNet ADM Dimension 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  3289.9630851745605
[Memory Management] Model Memory (MB) =  454.2076225280762
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1811.7554626464844
Moving model(s) has taken 0.15 seconds
Model loaded in 4.4s (load weights from disk: 0.4s, forge load real models: 2.8s, calculate empty prompt: 1.1s).
2024-03-25 17:35:50,226 - ControlNet - INFO - ControlNet UI callback registered.
Civitai Helper: Set Proxy:
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 29.0s (prepare environment: 6.0s, import torch: 4.8s, import gradio: 1.1s, setup paths: 0.8s, initialize shared: 0.2s, other imports: 1.1s, load scripts: 6.6s, scripts list_optimizers: 0.1s, create ui: 6.6s, gradio launch: 0.4s, add APIs: 1.3s).
2024-03-25 17:37:17,732 - AnimateDiff - INFO - AnimateDiff process start.
2024-03-25 17:37:17,733 - AnimateDiff - INFO - Loading motion module mm_sd_v15_v2.ckpt from C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions\sd-forge-animatediff\model\mm_sd_v15_v2.ckpt
2024-03-25 17:37:19,629 - AnimateDiff - INFO - Guessed mm_sd_v15_v2.ckpt architecture: MotionModuleType.AnimateDiffV2
2024-03-25 17:37:20,277 - AnimateDiff - INFO - Attempting to extract frames via ffmpeg from C:\Users\Charl\AppData\Local\Temp\gradio\4df958abb9aaf3af2ad423822617566255a91c0b\openpose_result.mp4 to GIF\openpose_result-2f9ba7c4
ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
  libavutil      58.  2.100 / 58.  2.100
  libavcodec     60.  3.100 / 60.  3.100
  libavformat    60.  3.100 / 60.  3.100
  libavdevice    60.  1.100 / 60.  1.100
  libavfilter     9.  3.100 /  9.  3.100
  libswscale      7.  1.100 /  7.  1.100
  libswresample   4. 10.100 /  4. 10.100
  libpostproc    57.  1.100 / 57.  1.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\Users\Charl\AppData\Local\Temp\gradio\4df958abb9aaf3af2ad423822617566255a91c0b\openpose_result.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:02.53, start: 0.000000, bitrate: 1621 kb/s
  Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x2048, 1616 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 000001e2e2f0cf80] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'GIF\openpose_result-2f9ba7c4\%09d.jpg':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf60.3.100
  Stream #0:0(und): Video: mjpeg, yuvj420p(pc, progressive), 1080x2048, q=2-31, 200 kb/s, 30 fps, 30 tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.3.100 mjpeg
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
frame=   39 fps=0.0 q=1.0 Lsize=N/A time=00:00:01.26 bitrate=N/A speed=4.15x
video:3396kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
2024-03-25 17:37:20,754 - AnimateDiff - INFO - AnimateDiff + ControlNet will generate 39 frames.
2024-03-25 17:37:20,788 - ControlNet - INFO - ControlNet Input Mode: InputMode.BATCH
2024-03-25 17:37:20,795 - ControlNet - INFO - Try to read image: GIF\openpose_result-2f9ba7c4\GIF\openpose_result-2f9ba7c4\000000001.jpg
[ WARN:0@103.326] global D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp (239) cv::findDecoder imread_('GIF\openpose_result-2f9ba7c4\GIF\openpose_result-2f9ba7c4\000000001.jpg'): can't open/read file: check file path/integrity
*** Error running process: C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\scripts.py", line 803, in process
        script.process(p, *script_args)
      File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 560, in process
        self.process_unit_after_click_generate(p, unit, params, *args, **kwargs)
      File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 291, in process_unit_after_click_generate
        input_list, resize_mode = self.get_input_data(p, unit, preprocessor)
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 194, in get_input_data
        img = np.ascontiguousarray(cv2.imread(img_path)[:, :, ::-1]).copy()
    TypeError: 'NoneType' object is not subscriptable

---
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\LCM_LoRA_Weights_SD15.safetensors for BaseModel-UNet with 278 keys at weight 0.8 (skipped 0 keys)
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\perfect hands_1.5.safetensors for BaseModel-UNet with 192 keys at weight 1.0 (skipped 0 keys)
[LORA] Loaded C:\Users\Charl\Desktop\python_charlie\A1111\stable-diffusion-webui\models\Lora\perfect hands_1.5.safetensors for BaseModel-CLIP with 72 keys at weight 1.0 (skipped 0 keys)
2024-03-25 17:37:21,104 - AnimateDiff - INFO - Setting DDIM alpha.
To load target model SD1ClipModel
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) =  2950.0511264801025
[Memory Management] Model Memory (MB) =  0.0
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1926.0511264801025
Moving model(s) has taken 0.30 seconds
*** Error running process_before_every_sampling: C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\scripts.py", line 835, in process_before_every_sampling
        script.process_before_every_sampling(p, *script_args, **kwargs)
      File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 567, in process_before_every_sampling
        self.process_unit_before_every_sampling(p, unit, self.current_params[i], *args, **kwargs)
    KeyError: 0

---
2024-03-25 17:37:21,626 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet.
To load target model BaseModel
To load target model MotionWrapper
Begin to load 2 models
[Memory Management] Current Free GPU Memory (MB) =  3255.9677600860596
[Memory Management] Model Memory (MB) =  1639.406135559082
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  592.5616245269775
[Memory Management] Current Free GPU Memory (MB) =  1616.5616245269775
[Memory Management] Model Memory (MB) =  866.7327880859375
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  -274.17116355895996
[Memory Management] Requested ASYNC Preserved Memory (MB) =  455.8166341781616
[Memory Management] Parameters Loaded to ASYNC Stream (MB) =  408.629150390625
[Memory Management] Parameters Loaded to GPU (MB) =  455.7989501953125
Moving model(s) has taken 1.81 seconds
 70%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊                                                                                  | 7/10 [02:04<00:53, 17.76s/it]
Traceback (most recent call last):
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\sd_samplers_kdiffusion.py", line 252, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\sd_samplers_kdiffusion.py", line 252, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\sd_samplers_lcm.py", line 72, in sample_lcm
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules_forge\forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\modules\samplers.py", line 256, in calc_cond_uncond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\extensions\sd-forge-animatediff\scripts\animatediff_infv2v.py", line 146, in mm_sd_forward
    out = apply_model(info["input"][_context], info["timestep"][_context], **info_c)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 915, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 55, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\attention.py", line 632, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\attention.py", line 459, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\diffusionmodules\util.py", line 194, in checkpoint
    return func(*inputs)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\attention.py", line 569, in _forward
    x = self.ff(self.norm3(x))
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\attention.py", line 91, in forward
    return self.net(x)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\container.py", line 215, in forward
    input = module(input)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Charl\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\attention.py", line 71, in forward
    return x * F.gelu(gate)
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated     : 3.16 GiB
Requested               : 187.50 MiB
Device limit            : 4.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction)
                        : 17179869184.00 GiB
Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated     : 3.16 GiB
Requested               : 187.50 MiB
Device limit            : 4.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction)
                        : 17179869184.00 GiB
*** Error completing request
*** Arguments: ('task(3hjpssy710w6slo)', <gradio.routes.Request object at 0x0000018A253C92A0>, '1boy, batman, the dark knight, on the street, moon light, street light, night, detailed, realistic, full body, <lora:LCM_LoRA_Weights_SD15:0.8>, <lora:perfect hands:1> ', 'text,logo,Watermark,Copyright,lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract], logo, watermark,', ['Futuristic', 'Lighting: Natural', 'Skin Enhancer (clean)'], 10, 'LCM', 1, 1, 2, 768, 405, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.BATCH: 'batch'>, use_preview_as_input=False, batch_image_dir='GIF\\openpose_result-2f9ba7c4', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=True, module='None', model='control_v11p_sd15_openpose [cab727d4]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='ControlNet is more important', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 
'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000018A2633E1A0>, False, '(SDXL) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, '', '', '', False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '') {}
    Traceback (most recent call last):
      File "C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

---

Additional information

No response

DA-Charlie commented 6 months ago

I understand the CUDA error that comes after; I'll change my CLI arguments. But I still think there is an issue with AnimateDiff itself; if I find anything I'll report it here. I understand you guys are busy.
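
As for the CUDA OOM: the launch line includes --pin-shared-memory, --cuda-stream and --cuda-malloc, which Forge's own flag descriptions present as speed-ups that carry OOM risk on low-VRAM GPUs. On a 4 GB card, a more conservative launch line might simply drop those three flags (same directories as above; an illustration of the kind of change meant, not a confirmed fix):

--always-normal-vram --api --xformers --ckpt-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Stable-diffusion --hypernetwork-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/hypernetworks --embeddings-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/embeddings --lora-dir C:/Users/Charl/Desktop/python_charlie/A1111/stable-diffusion-webui/models/Lora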