continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: 'NoneType' object is not iterable #487

Open INTstinkt opened 7 months ago

INTstinkt commented 7 months ago

Is there an existing issue for this?

Have you read FAQ on README?

What happened?

Whenever I try to use AnimateDiff, I just get this error.

Steps to reproduce the problem

Activate AnimateDiff

What should have happened?

Create a GIF

Commit where the problem happens

version: f0.0.17v1.8.0rc-latest-276-g29be1da7  •  python: 3.10.6  •  torch: 2.1.2+cu121  •  xformers: N/A  •  gradio: 3.41.2  •  checkpoint: [fa1224c923]

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

No

Console logs

To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  22022.3662109375
[Memory Management] Model Memory (MB) =  1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  19358.952445983887
Moving model(s) has taken 0.25 seconds
  0%|                                                                                                            | 0/20 [00:00<?, ?it/s]2024-03-26 23:29:24,919 - AnimateDiff - WARNING - No motion module detected, falling back to the original forward. You are most likely using !Adetailer. !Adetailer post-process your outputs sequentially, and there will NOT be motion module in your UNet, so there might be NO temporal consistency within the inpainted face. Use at your own risk. If you really want to pursue inpainting with AnimateDiff inserted into UNet, use Segment Anything to generate masks for each frame and inpaint them with AnimateDiff + ControlNet. Note that my proposal might be good or bad, do your own research to figure out the best way.
  0%|                                                                                                            | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\einops\einops.py", line 410, in reduce
    return _apply_recipe(recipe, tensor, reduction_type=reduction)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\einops\einops.py", line 233, in _apply_recipe
    _reconstruct_from_shape(recipe, backend.shape(tensor))
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\einops\einops.py", line 198, in _reconstruct_from_shape_uncached
    raise EinopsError("Shape mismatch, can't divide axis of length {} in chunks of {}".format(
einops.EinopsError: Shape mismatch, can't divide axis of length 2 in chunks of 16

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\AI\Webui-Forge\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "C:\AI\Webui-Forge\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "C:\AI\Webui-Forge\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "C:\AI\Webui-Forge\webui\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "C:\AI\Webui-Forge\webui\modules\processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\AI\Webui-Forge\webui\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\AI\Webui-Forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\AI\Webui-Forge\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "C:\AI\Webui-Forge\webui\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI\Webui-Forge\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\Webui-Forge\webui\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "C:\AI\Webui-Forge\webui\modules_forge\forge_sampler.py", line 88, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "C:\AI\Webui-Forge\webui\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "C:\AI\Webui-Forge\webui\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "C:\AI\Webui-Forge\webui\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\Webui-Forge\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "C:\AI\Webui-Forge\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 61, in forward_timestep_embed
    x = layer(x)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\Webui-Forge\webui\extensions\sd-webui-animatediff\motion_module.py", line 136, in forward
    return self.temporal_transformer(x)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\Webui-Forge\webui\extensions\sd-webui-animatediff\motion_module.py", line 194, in forward
    hidden_states = block(hidden_states)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\Webui-Forge\webui\extensions\sd-webui-animatediff\motion_module.py", line 248, in forward
    hidden_states = attention_block(norm_hidden_states) + hidden_states
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\AI\Webui-Forge\webui\extensions\sd-webui-animatediff\motion_module.py", line 337, in forward
    x = rearrange(x, "(b f) d c -> (b d) f c", f=video_length)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\einops\einops.py", line 487, in rearrange
    return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
  File "C:\AI\Webui-Forge\system\python\lib\site-packages\einops\einops.py", line 418, in reduce
    raise EinopsError(message + '\n {}'.format(e))
einops.EinopsError:  Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c".
 Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}.
 Shape mismatch, can't divide axis of length 2 in chunks of 16
 Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c".
 Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}.
 Shape mismatch, can't divide axis of length 2 in chunks of 16
*** Error completing request
*** Arguments: ('task(awpc0lakczh1udl)', <gradio.routes.Request object at 0x0000020CBBF953F0>, '', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 
'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, '(SDXL) Only Generate Transparent Image (Attention Injection)', 1, 1, None, None, None, 'Crop and Resize', False, '', '', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000020CBA8E1750>, False, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, False, -1, -1, 0, '1,1', 'Horizontal', '', 2, 1, False, '1.5', 0, False, 0.01, 0.5, -0.13, 0, 0, 0, 0, 0, False, 'Default', 'Default', 1, False, 0, False, 0, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, 
resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 0.5, 2, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', 0, '', True, '0', False, 'SDXL', 'Standard', 'dynamic', True, False, 0, 'Range', 1, 'GPU', True, False, False, False, False, False, 0, 448, False, 448, False, False, 3, False, 3, True, 3, False, 'Horizontal', False, False, 'u2net', False, True, True, False, 0, 2.5, 'polylines_sharp', ['left-right', 'red-cyan-anaglyph'], 2, 0, '∯boost∯clipdepth∯clipdepth_far∯clipdepth_mode∯clipdepth_near∯compute_device∯do_output_depth∯gen_heatmap∯gen_normalmap∯gen_rembg∯gen_simple_mesh∯gen_stereo∯model_type∯net_height∯net_size_match∯net_width∯normalmap_invert∯normalmap_post_blur∯normalmap_post_blur_kernel∯normalmap_pre_blur∯normalmap_pre_blur_kernel∯normalmap_sobel∯normalmap_sobel_kernel∯output_depth_combine∯output_depth_combine_axis∯output_depth_invert∯pre_depth_background_removal∯rembg_model∯save_background_removal_masks∯save_outputs∯simple_mesh_occlude∯simple_mesh_spherical∯stereo_balance∯stereo_divergence∯stereo_fill_algo∯stereo_modes∯stereo_offset_exponent∯stereo_separation', 'Positive', 0, ', ', 'Generate and always save', 32) {}
    Traceback (most recent call last):
      File "C:\AI\Webui-Forge\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

Additional information

No response
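
Editor's note: the einops failure in the log can be reproduced in isolation. rearrange is asked to split a leading axis of length 2 (a plain cond/uncond image batch) into 16 frames, which cannot work. A minimal sketch, with the tensor shape taken from the log above; everything else is illustrative:

import torch
from einops import rearrange

# Shape from the log: (batch, tokens, channels) = (2, 4096, 320).
# The motion module expects the leading axis to be batch * video_length,
# e.g. 1 * 16 = 16 frames; a 2-image batch cannot be split into chunks of 16.
x = torch.randn(2, 4096, 320)

try:
    rearrange(x, "(b f) d c -> (b d) f c", f=16)  # same pattern as motion_module.py
except Exception as e:
    print(e)  # Shape mismatch, can't divide axis of length 2 in chunks of 16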

codingbee1994 commented 6 months ago

Hi, I hope you fix that error. I ran into the same one, but I managed to fix it. In my case, I'm a WebUI Forge user, but in the Extensions tab I had installed the AnimateDiff extension built for the regular WebUI, not the one for Forge. So, first, I reinstalled the correct AnimateDiff extension for WebUI Forge. Second, I moved the motion model files into the new extension's model directory (extensions\sd-webui-animatediff\model → extensions\sd-forge-animatediff\model); a rough sketch of this step follows below.
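A sketch of that second step in Python. The install root is taken from the traceback above, and the destination folder name is an assumption based on the sd-forge-animatediff extension; adjust both to your setup:

import shutil
from pathlib import Path

# Paths are illustrative: the root matches the traceback above, and the
# destination assumes the Forge fork installs as sd-forge-animatediff.
ext = Path(r"C:\AI\Webui-Forge\webui\extensions")
src = ext / "sd-webui-animatediff" / "model"
dst = ext / "sd-forge-animatediff" / "model"

dst.mkdir(parents=True, exist_ok=True)
for pattern in ("*.ckpt", "*.safetensors"):  # motion module checkpoints
    for f in src.glob(pattern):
        shutil.move(str(f), str(dst / f.name))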

I'm not sure whether this fixes your error, but it may help someone who hits the same one, so I'm leaving this comment. Have a good day!

P.S. My English isn't great, so please bear with me; I don't mean anything bad.