continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: TypeError: 'NoneType' object is not iterable #442

Closed iamsuper123 closed 6 months ago

iamsuper123 commented 6 months ago

Is there an existing issue for this?

Have you read the FAQ in the README?

What happened?

The animation didn't generate.

Steps to reproduce the problem

  1. Tick "Enable AnimateDiff"
  2. Generate an image

What should have happened?

The animation should have been generated.

Commit where the problem happens

webui: SD Forge
extension: AnimateDiff

What browsers do you use to access the UI ?

No response

Command Line Arguments

--ckpt-dir %A1111_HOME%/models/Stable-diffusion --hypernetwork-dir %A1111_HOME%/models/hypernetworks --embeddings-dir %A1111_HOME%/embeddings --lora-dir %A1111_HOME%/models/Lora

Console logs

*** Error running process: C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\scripts.py", line 803, in process
        script.process(p, *script_args)
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 548, in process
        self.process_unit_after_click_generate(p, unit, params, *args, **kwargs)
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 405, in process_unit_after_click_generate
        assert unit.model != 'None', 'You have not selected any control model!'
    AssertionError: You have not selected any control model!

---
*** Error running process_before_every_sampling: C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\scripts.py", line 835, in process_before_every_sampling
        script.process_before_every_sampling(p, *script_args, **kwargs)
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 555, in process_before_every_sampling
        self.process_unit_before_every_sampling(p, unit, self.current_params[i], *args, **kwargs)
    KeyError: 0

---
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]*** Error executing callback cfg_denoiser_callback for C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\script_callbacks.py", line 233, in cfg_denoiser_callback
        c.callback(params)
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 91, in animatediff_on_cfg_denoiser
        ad_params.text_cond = ad_params.prompt_scheduler.multi_cond(cfg_params.text_cond, prompt_closed_loop)
    AttributeError: 'NoneType' object has no attribute 'multi_cond'

---
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\einops\einops.py", line 410, in reduce
    return _apply_recipe(recipe, tensor, reduction_type=reduction)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\einops\einops.py", line 233, in _apply_recipe
    _reconstruct_from_shape(recipe, backend.shape(tensor))
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\einops\einops.py", line 198, in _reconstruct_from_shape_uncached
    raise EinopsError("Shape mismatch, can't divide axis of length {} in chunks of {}".format(
einops.EinopsError: Shape mismatch, can't divide axis of length 2 in chunks of 16

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\processing.py", line 750, in process_images
    res = process_images_inner(p)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\processing.py", line 1276, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules_forge\forge_sampler.py", line 82, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\ldm_patched\modules\model_base.py", line 89, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 61, in forward_timestep_embed
    x = layer(x)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions\sd-webui-animatediff\motion_module.py", line 132, in forward
    return self.temporal_transformer(x)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions\sd-webui-animatediff\motion_module.py", line 190, in forward
    hidden_states = block(hidden_states)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions\sd-webui-animatediff\motion_module.py", line 244, in forward
    hidden_states = attention_block(norm_hidden_states) + hidden_states
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\extensions\sd-webui-animatediff\motion_module.py", line 333, in forward
    x = rearrange(x, "(b f) d c -> (b d) f c", f=video_length)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\einops\einops.py", line 487, in rearrange
    return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
  File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\system\python\lib\site-packages\einops\einops.py", line 418, in reduce
    raise EinopsError(message + '\n {}'.format(e))
einops.EinopsError:  Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c".
 Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}.
 Shape mismatch, can't divide axis of length 2 in chunks of 16
 Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c".
 Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}.
 Shape mismatch, can't divide axis of length 2 in chunks of 16
*** Error completing request
*** Arguments: ('task(qcs41f9trqjeqlu)', <gradio.routes.Request object at 0x000001CA22A98DF0>, '<lora:fashion_mix-10:1>, full body pic, crop top, short shorts, leg warmers, mirror selfie', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001CA22AFB5E0>, ControlNetUnit(input_mode=<InputMode.MERGE: 'merge'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[{'name': 'C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\f1e1aaae350a96c2b8e3b28fcc1cdbd0454901dd\\SaveTik.co_7256091775280991514000000022.jpg', 'data': 'http://127.0.0.1:7860/file=C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\f1e1aaae350a96c2b8e3b28fcc1cdbd0454901dd\\SaveTik.co_7256091775280991514000000022.jpg', 'is_file': True}, {'name': 'C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\13ed33cb00bacf5f839d4546bd0f46ce37ce6d52\\SaveTik.co_7256091775280991514000000000.jpg', 'data': 'http://127.0.0.1:7860/file=C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\13ed33cb00bacf5f839d4546bd0f46ce37ce6d52\\SaveTik.co_7256091775280991514000000000.jpg', 'is_file': True}, {'name': 'C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\462699e1ca5c1d08bc5d3b8dc44790074dc08701\\SaveTik.co_7256091775280991514000000063.jpg', 'data': 'http://127.0.0.1:7860/file=C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\462699e1ca5c1d08bc5d3b8dc44790074dc08701\\SaveTik.co_7256091775280991514000000063.jpg', 'is_file': True}, {'name': 'C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\329e5787dc1d3f1f459ed0465226e2c13821211c\\SaveTik.co_7256091775280991514000000048.jpg', 'data': 
'http://127.0.0.1:7860/file=C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\329e5787dc1d3f1f459ed0465226e2c13821211c\\SaveTik.co_7256091775280991514000000048.jpg', 'is_file': True}, {'name': 'C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\2707a027fbc09fc8453dc9ab9521819e4e2c4119\\SaveTik.co_7256091775280991514000000033.jpg', 'data': 'http://127.0.0.1:7860/file=C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\2707a027fbc09fc8453dc9ab9521819e4e2c4119\\SaveTik.co_7256091775280991514000000033.jpg', 'is_file': True}, {'name': 'C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\2a98b9d0aee2d8ec205734bd5c3f9d5c9bf76b09\\SaveTik.co_7256091775280991514000000013.jpg', 'data': 'http://127.0.0.1:7860/file=C:\\Users\\alienware\\AppData\\Local\\Temp\\gradio\\2a98b9d0aee2d8ec205734bd5c3f9d5c9bf76b09\\SaveTik.co_7256091775280991514000000013.jpg', 'is_file': True}], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=True, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop 
and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\alienware\Documents\webui_forge_cu121_torch21\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

Additional information

I'm using SD Forge. Every time I try to generate an image with AnimateDiff enabled, generation fails with "TypeError: 'NoneType' object is not iterable". The only way for me to generate anything at all is to disable the extension completely.

continue-revolution commented 6 months ago

Did you check out the forge/master branch?

iamsuper123 commented 6 months ago

Can you be more specific? What does "checkout" mean?

BojanRBB commented 6 months ago

@iamsuper123 I had the same problem; here is a solution:

  1. Go to Extensions > Install from URL and install from https://github.com/continue-revolution/sd-forge-animatediff
  2. Restart the WebUI.
  3. You should now have these 2 extensions installed:
     - sd-forge-animatediff | https://github.com/continue-revolution/sd-forge-animatediff | forge-master
     - sd-webui-animatediff | https://github.com/continue-revolution/sd-webui-animatediff.git | master

This solved it for me.

continue-revolution commented 6 months ago

The solution proposed by @BojanRBB is the simplest one if you don't know git.

If you do know git, run `git checkout forge/master` inside `extensions/sd-webui-animatediff`.

iamsuper123 commented 6 months ago

> @iamsuper123 I had the same problem; here is a solution:
>
>   1. Go to Extensions > Install from URL and install from https://github.com/continue-revolution/sd-forge-animatediff
>   2. Restart the WebUI.
>   3. You should now have these 2 extensions installed:
>      - sd-forge-animatediff | https://github.com/continue-revolution/sd-forge-animatediff | forge-master
>      - sd-webui-animatediff | https://github.com/continue-revolution/sd-webui-animatediff.git | master
>
> This solved it for me.

Still getting the same "NoneType" issue.

iamsuper123 commented 6 months ago

> @iamsuper123 I had the same problem; here is a solution:
>
>   1. Go to Extensions > Install from URL and install from https://github.com/continue-revolution/sd-forge-animatediff
>   2. Restart the WebUI.
>   3. You should now have these 2 extensions installed:
>      - sd-forge-animatediff | https://github.com/continue-revolution/sd-forge-animatediff | forge-master
>      - sd-webui-animatediff | https://github.com/continue-revolution/sd-webui-animatediff.git | master
>
> This solved it for me.

Never mind, thanks so much! I unchecked the regular sd-webui-animatediff extension and it worked.