continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: Error completing request when trying VIDEO INPUT of 25 frames #342

Closed · LIQUIDMIND111 closed this issue 11 months ago

LIQUIDMIND111 commented 11 months ago

Is there an existing issue for this?

Have you read the FAQ in the README?

What happened?

When using the extension normally, I have no issues. But when trying a video input together with the ControlNet tile model, I get this error:

Steps to reproduce the problem

  1. Go to the AnimateDiff section
  2. Set a video input at 512x512
  3. Enable ControlNet with the 1.5 tile model
  4. Generate
  5. All sampling steps complete, but then a NaN error is raised.

What should have happened?

Generate a regular video.

Commit where the problem happens

webui: v1.6.0 • python: 3.10.9 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: 399a00a7b5

animatediff 1.12.1

What browsers do you use to access the UI?

Chrome

Command Line Arguments

--xformers

Console logs

A tensor with all NaNs was produced in VAE.
Web UI will now convert VAE into 32-bit float and retry.
To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.
To always start with 32-bit VAE, use --no-half-vae commandline flag.
==========================================================================================
*** Error completing request
*** Arguments: ('task(hvnz9s8yweri5ot)', 'A robot, futuristic look, mechanical, glossy, shiny metals, cinematic lighting <lora:1.5LCM:1>', '', ['Great Negative 2'], 10, 'LCM', 1, 1, 2, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000015008F3F160>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000015008F3CCD0>, UiControlNetUnit(enabled=True, module='none', model='control_v11f1e_sd15_tile [a371b31b]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), '<p>Running in txt2img mode:<br><br>Render these video formats:</p>', '<p style="margin-bottom:0.75em">Animation Parameters</p>', '<p style="margin-bottom:0.75em">Initial Parameters</p>', '<p style="margin-bottom:0.75em">Prompt Template, applied to each keyframe below</p>', '<p style="margin-bottom:0.75em">Props, Stamps</p>', '<p>Supported Keyframes:<br>time_s | source | video, images, img2img | path<br>time_s | prompt | positive_prompts | negative_prompts<br>time_s | template | positive_prompts | negative_prompts<br>time_s | prompt_from_png | file_path<br>time_s | prompt_vtt | vtt_filepath<br>time_s | transform | zoom | x_shift | y_shift | rotation<br>time_s | seed | new_seed_int<br>time_s | noise | added_noise_strength<br>time_s | denoise | denoise_value<br>time_s | cfg_scale | cfg_scale_value<br>time_s | set_text | textblock_name | text_prompt | x | y | w | h | fore_color | back_color | font_name<br>time_s | clear_text | textblock_name<br>time_s | prop | prop_name | prop_filename | x pos | y pos | scale | rotation<br>time_s | set_stamp | stamp_name | stamp_filename | x pos | y pos | scale | rotation<br>time_s | clear_stamp | stamp_name<br>time_s | col_set<br>time_s | col_clear<br>time_s | model | LCM_Dreamshaper_v7_4k, epicphotogasm_v31Getwhatyouprompt</p>', '10.0', '15', False, False, True, '1.0', '', '', '', 0.4, '0', '0', '0', '', False, 0, False, 0.1, False, '<p>Running in txt2img mode:<br><br>Render these video formats:</p>', '<p style="margin-bottom:0.75em">Animation Parameters</p>', '<p style="margin-bottom:0.75em">Initial Parameters</p>', '<p style="margin-bottom:0.75em">Prompt Template, applied to each keyframe below</p>', '<p style="margin-bottom:0.75em">Props, Stamps</p>', '<p>Supported Keyframes:<br>time_s | source | video, images, img2img | path<br>time_s | prompt | positive_prompts | negative_prompts<br>time_s | template | positive_prompts | negative_prompts<br>time_s | prompt_from_png | file_path<br>time_s | prompt_vtt | vtt_filepath<br>time_s | transform | zoom | x_shift | y_shift | rotation<br>time_s | seed | new_seed_int<br>time_s | noise | added_noise_strength<br>time_s | denoise | denoise_value<br>time_s | cfg_scale | cfg_scale_value<br>time_s | set_text 
| textblock_name | text_prompt | x | y | w | h | fore_color | back_color | font_name<br>time_s | clear_text | textblock_name<br>time_s | prop | prop_name | prop_filename | x pos | y pos | scale | rotation<br>time_s | set_stamp | stamp_name | stamp_filename | x pos | y pos | scale | rotation<br>time_s | clear_stamp | stamp_name<br>time_s | col_set<br>time_s | col_clear<br>time_s | model | LCM_Dreamshaper_v7_4k, epicphotogasm_v31Getwhatyouprompt</p>', '10.0', '15', False, False, True, '1.0', '', '', '', 0.4, '0', '0', '0', '', False, 0, False, 0.1, False, '', 1, True, 100, False, False, 'positive', 'comma', 0, False, False, '', 2, False, 4.0, '', 10.0, False, False, True, 30.0, True, False, False, 0, 0.0, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'linear (weight sum)', '10', 'C:\\TEMP\\StableDiff2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-prompt-travel\\img\\ref_ctrlnet', 'Lanczos', 2, 0, 0, 'mp4', 10.0, 0, '', True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, 'linear', 'lerp', 'token', 'random', '30', 'fixed', 1, '8', None, 'Lanczos', 2, 0, 0, 'mp4', 10.0, 0, '', True, False, False, 0, 0, 0.0001, 75, 0) {}
    Traceback (most recent call last):
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 118, in hacked_processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\modules\processing.py", line 875, in process_images_inner
        x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\modules\processing.py", line 601, in decode_latent_batch
        raise e
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\modules\processing.py", line 598, in decode_latent_batch
        devices.test_for_nans(sample, "vae")
      File "C:\TEMP\StableDiff2\stable-diffusion-webui\modules\devices.py", line 136, in test_for_nans
        raise NansException(message)
    modules.devices.NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Additional information

It completes all the sampling steps, but then fails with a NaN error.

This is the log immediately before the error:

2023-11-23 23:57:46,961 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.safetensors into SD1.5 UNet middle block.
2023-11-23 23:57:46,961 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.safetensors into SD1.5 UNet input blocks.
2023-11-23 23:57:46,962 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.safetensors into SD1.5 UNet output blocks.
2023-11-23 23:57:46,963 - AnimateDiff - INFO - Setting DDIM alpha.
2023-11-23 23:57:46,965 - AnimateDiff - INFO - Injection finished.
2023-11-23 23:57:46,966 - AnimateDiff - INFO - Hacking loral to support motion lora
2023-11-23 23:57:46,966 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
2023-11-23 23:57:46,966 - AnimateDiff - INFO - Hacking ControlNet.
2023-11-23 23:57:47,450 - ControlNet - INFO - Loading model: control_v11f1e_sd15_tile [a371b31b]
2023-11-23 23:58:02,285 - ControlNet - INFO - Loaded state_dict from [C:\TEMP\StableDiff2\stable-diffusion-webui\extensions\sd-webui-controlnet\models\control_v11f1e_sd15_tile.pth]
2023-11-23 23:58:02,286 - ControlNet - INFO - controlnet_default_config
2023-11-23 23:58:04,562 - ControlNet - INFO - ControlNet model control_v11f1e_sd15_tile [a371b31b] loaded.
2023-11-23 23:58:04,801 - ControlNet - INFO - Loading preprocessor: none
2023-11-23 23:58:04,843 - ControlNet - INFO - preprocessor resolution = 512
2023-11-23 23:58:05,190 - ControlNet - INFO - ControlNet Hooked - Time = 17.90124773979187
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [15:48<00:00, 94.86s/it]


continue-revolution commented 11 months ago

Please always add --no-half-vae to your command line arguments. I always do this. If you run into further errors, post here and re-open this issue.
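For readers hitting the same NaN-in-VAE error: a minimal sketch of where that flag usually goes on a Windows install launched through webui-user.bat. The --xformers flag is taken from this report's setup; treat the snippet as an illustration under those assumptions, not the maintainer's exact configuration.

```bat
rem webui-user.bat (sketch based on the flags mentioned in this issue)
rem --xformers is the reporter's existing argument; --no-half-vae keeps the VAE in
rem 32-bit floats so its decode no longer produces NaNs at half precision.
set COMMANDLINE_ARGS=--xformers --no-half-vae

call webui.bat
```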

LIQUIDMIND111 commented 11 months ago

> Please always add --no-half-vae to your command line arguments. I always do this. If you run into further errors, post here and re-open this issue.

OK, I have tried --no-half-vae, but then I get a CUDA out-of-memory error on my 6GB VRAM GPU.
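A possible next step, not confirmed anywhere in this thread: on 6GB cards the fp32 VAE decode often only fits when --no-half-vae is combined with a memory-saving flag such as --medvram (or, more aggressively, --lowvram). A hedged sketch:

```bat
rem Hypothetical webui-user.bat for a 6GB GPU.
rem --medvram splits the model and keeps parts in CPU RAM to reduce VRAM use;
rem swap it for --lowvram if out-of-memory errors persist (slower, but lighter).
set COMMANDLINE_ARGS=--xformers --no-half-vae --medvram

call webui.bat
```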