guoyww / AnimateDiff

Official implementation of AnimateDiff.
https://animatediff.github.io
Apache License 2.0

Out of Memory Errors / "AttributeError: 'AnimateDiffProcess' object has no attribute 'text_cond'" or "'multi_cond'" when running with ControlNet V2V #296

Open paulhewson2022 opened 6 months ago

paulhewson2022 commented 6 months ago

I am trying to run AnimateDiff with ControlNet V2V. Both ControlNet and AnimateDiff work fine separately. I am following these instructions (scroll down to "Video to Video Using ControlNet's 'Video Source' Upload") almost exactly, save for making the prompt slightly more SFW. In summary, the guide provides an example MP4 file, a 113-frame/30 fps/1 MB video with very little movement, and has the user load it into ControlNet with the OpenPose model.

As background, I'm running A1111 on Windows 11 with an RTX 4060 Ti 16GB: A1111 v1.8.0, Python 3.10.13, torch 2.2.1+cu118, xformers 0.0.24+cu118, gradio 3.41.2.

My image previews are off, and my prompts are padded.

When I try generating a GIF at the guide's suggested resolution of 768x768, I get the output below, ending with an out-of-memory error. OK, maybe 768x768 is too much for a 113-frame GIF even with 16GB of VRAM, but I did notice an ffmpeg error, a weird "ValueError: controlnet is enabled but no input image is given" error (I didn't think I was supposed to drop anything into the ControlNet interface for V2V), and an "AttributeError: 'NoneType' object has no attribute 'multi_cond'" error.

---
2024-03-04 12:02:53,491 - AnimateDiff - INFO - AnimateDiff process start.
2024-03-04 12:02:53,491 - AnimateDiff - INFO - Motion module already injected. Trying to restore.
2024-03-04 12:02:53,492 - AnimateDiff - INFO - Restoring DDIM alpha.
2024-03-04 12:02:53,492 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2024-03-04 12:02:53,492 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2024-03-04 12:02:53,493 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet middle block.
2024-03-04 12:02:53,493 - AnimateDiff - INFO - Removal finished.
2024-03-04 12:02:53,502 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet middle block.
2024-03-04 12:02:53,502 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet input blocks.
2024-03-04 12:02:53,503 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet output blocks.
2024-03-04 12:02:53,503 - AnimateDiff - INFO - Setting DDIM alpha.
2024-03-04 12:02:53,506 - AnimateDiff - INFO - Injection finished.
2024-03-04 12:02:53,507 - AnimateDiff - ERROR - [AnimateDiff] Error extracting frames via ffmpeg: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'GIFC:\\Users\\ernie\\AppData\\Local\\Temp\\gradio\\185e036dbc445c446fae79292baaa6316036fd21\\gil512.mp4-82824f3b', fall back to OpenCV.
2024-03-04 12:02:53,507 - AnimateDiff - INFO - Attempting to extract frames via OpenCV from C:\Users\ernie\AppData\Local\Temp\gradio\185e036dbc445c446fae79292baaa6316036fd21\gil512.mp4 to GIFC:\Users\ernie\AppData\Local\Temp\gradio\185e036dbc445c446fae79292baaa6316036fd21\gil512.mp4-82824f3b
*** Error running before_process: C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_utils.py", line 102, in extract_frames_from_video
        ffmpeg_extract_frames(params.video_source, params.video_path)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_utils.py", line 73, in ffmpeg_extract_frames
        tmp_frame_dir.mkdir(parents=True, exist_ok=True)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\pathlib.py", line 1175, in mkdir
        self._accessor.mkdir(self, mode)
    OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'GIFC:\\Users\\ernie\\AppData\\Local\\Temp\\gradio\\185e036dbc445c446fae79292baaa6316036fd21\\gil512.mp4-82824f3b'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\ernie\stable-diffusion-webui\modules\scripts.py", line 776, in before_process
        script.before_process(p, *script_args)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 64, in before_process
        params.set_p(p)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_ui.py", line 169, in set_p
        extract_frames_from_video(self)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_utils.py", line 105, in extract_frames_from_video
        cv2_extract_frames(params.video_source, params.video_path)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_utils.py", line 84, in cv2_extract_frames
        tmp_frame_dir.mkdir(parents=True, exist_ok=True)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\pathlib.py", line 1175, in mkdir
        self._accessor.mkdir(self, mode)
    OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'GIFC:\\Users\\ernie\\AppData\\Local\\Temp\\gradio\\185e036dbc445c446fae79292baaa6316036fd21\\gil512.mp4-82824f3b'
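
The mangled directory name in that OSError ('GIFC:\...') looks like the save-format string being glued onto the front of the absolute video path, which is what buries the drive letter and makes the path invalid on Windows. A minimal sketch of that failure mode, as a hypothetical reconstruction rather than the extension's actual code:

```python
from pathlib import Path

# Hypothetical reconstruction of the bad path seen in the log: prepending
# the format string "GIF" to an absolute Windows path leaves the drive
# letter mid-string, so mkdir() raises WinError 123 in both the ffmpeg
# and OpenCV branches.
video_frame_dir = r"C:\Users\ernie\AppData\Local\Temp\gradio\185e036dbc445c446fae79292baaa6316036fd21\gil512.mp4-82824f3b"
tmp_frame_dir = Path("GIF" + video_frame_dir)     # -> 'GIFC:\Users\ernie\...'
tmp_frame_dir.mkdir(parents=True, exist_ok=True)  # OSError: [WinError 123]
```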

---
2024-03-04 12:02:53,537 - ControlNet - INFO - unit_separate = False, style_align = False
2024-03-04 12:02:53,537 - ControlNet - INFO - Loading model from cache: control_v11p_sd15_openpose [cab727d4]
*** Error running process: C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\Users\ernie\stable-diffusion-webui\modules\scripts.py", line 784, in process
        script.process(p, *script_args)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1275, in process
        self.controlnet_hack(p)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1260, in controlnet_hack
        self.controlnet_main_entry(p)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 949, in controlnet_main_entry
        input_image, resize_mode = Script.choose_input_image(p, unit, idx)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 726, in choose_input_image
        raise ValueError("controlnet is enabled but no input image is given")
    ValueError: controlnet is enabled but no input image is given

---
  0%|          | 0/20 [00:00<?, ?it/s]
*** Error executing callback cfg_denoiser_callback for C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Users\ernie\stable-diffusion-webui\modules\script_callbacks.py", line 230, in cfg_denoiser_callback
        c.callback(params)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 90, in animatediff_on_cfg_denoiser
        ad_params.text_cond = ad_params.prompt_scheduler.multi_cond(cfg_params.text_cond, prompt_closed_loop)
    AttributeError: 'NoneType' object has no attribute 'multi_cond'
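
The 'NoneType' here is consistent with the earlier crash: `set_p` died during frame extraction, so `prompt_scheduler` was presumably never created before the denoiser callback dereferenced it. A hypothetical defensive guard (my sketch, not the extension's code) would turn this into a silent skip:

```python
# Hypothetical guard sketch: if AnimateDiff setup failed earlier (e.g.
# frame extraction crashed in before_process), prompt_scheduler was never
# built, so bail out instead of raising AttributeError mid-sampling.
def animatediff_on_cfg_denoiser(ad_params, cfg_params, prompt_closed_loop):
    if getattr(ad_params, "prompt_scheduler", None) is None:
        return  # setup incomplete; leave cfg_params untouched
    ad_params.text_cond = ad_params.prompt_scheduler.multi_cond(
        cfg_params.text_cond, prompt_closed_loop
    )
```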

---
  0%|          | 0/20 [00:03<?, ?it/s]
*** Error completing request
*** Arguments: ('task(x7yeo6ztcmh8ojz)', <gradio.routes.Request object at 0x000002AC4C9A5B70>, '(best quality, masterpiece, 1girl, t-shrt, jeans, highest detailed)full body photo, ultra detailed, (textured_clothing), black_background, (intricate details, hyperdetailed:1.15), detailed, (official art, extreme detailed, highest detailed), ', 'EasyNegative, bad-hands-5  ', [], 20, 'DPM++ 2M Karras', 1, 1, 6, 768, 768, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, 862816124, False, -1, 0, 0, 0, False, 'CodeFormer', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000002AC4C9A4520>, UiControlNetUnit(enabled=True, module='openpose_full', model='control_v11p_sd15_openpose [cab727d4]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), 
UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', 1, 1, ['After Upscaling/Before Restore Face'], 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, False, '0', 'C:\\Users\\ernie\\stable-diffusion-webui\\models\\roop\\inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "C:\Users\ernie\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\ernie\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-prompt-history\lib_history\image_process_hijacker.py", line 15, in process_images
        return original_function(p)
      File "C:\Users\ernie\stable-diffusion-webui\modules\processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 48, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\modules\processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\ernie\stable-diffusion-webui\modules\processing.py", line 1257, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\ernie\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\ernie\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\Users\ernie\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\ernie\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\autograd\function.py", line 553, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "C:\Users\ernie\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
        x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 501, in xformers_attention_forward
        return self.to_out(out)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
        input = module(input)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 500, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "C:\Users\ernie\anaconda3\envs\autosd5\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
        return F.linear(input, self.weight, self.bias)
    torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.24 GiB. GPU 0 has a total capacity of 16.00 GiB of which 0 bytes is free. Of the allocated memory 14.52 GiB is allocated by PyTorch, and 210.67 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

---
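
For what it's worth, the allocator hint at the end of that message can be tried as-is. A minimal sketch, assuming you can edit whatever script launches the webui (the variable must be set before torch initializes CUDA):

```python
import os

# Set before torch touches CUDA; on Windows the equivalent is adding
#   set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
# to webui-user.bat. This only mitigates fragmentation, not peak usage.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # imported after the variable is set so the allocator sees it
```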

When I try reducing the resolution to 320x320, I get the same ControlNet and AttributeErrors mentioned above, but the run then ends with "Shape mismatch, can't divide axis of length 226 in chunks of 16" (recall that this is a 113-frame video).
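
A back-of-envelope guess at where 226 comes from (my assumption, not confirmed against the code): the sampler concatenates the conditional and unconditional batches, so 113 frames become an axis of 2 × 113 = 226, which 16 does not divide; that would also explain why 112 frames (2 × 112 = 224 = 14 × 16) gets past this point:

```python
# Assumed arithmetic behind the shape mismatch: cond + uncond batches are
# concatenated, doubling the frame axis before it is chunked by 16.
frames = 113
axis = 2 * frames      # 226
print(axis % 16)       # 2 -> "can't divide axis of length 226 in chunks of 16"
print((2 * 112) % 16)  # 0 -> 112 frames divides cleanly, so it runs
```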

Finally, when I try reducing the frame count to 112, it runs, but after every step I get the first error below before it continues, and the run then ends at 100% with the two errors at the bottom. It generates the frames but does not combine them into a GIF (again, AnimateDiff does this fine on its own), and the frames do not appear to adhere to the poses in the video.

  5%|█         | 1/20 [00:07<02:13,  7.03s/it]
*** Error executing callback cfg_denoiser_callback for C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Users\ernie\stable-diffusion-webui\modules\script_callbacks.py", line 230, in cfg_denoiser_callback
        c.callback(params)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 163, in animatediff_on_cfg_denoiser
        cfg_params.text_cond = ad_params.text_cond
    AttributeError: 'AnimateDiffProcess' object has no attribute 'text_cond'

---
100%|██████████| 20/20 [01:47<00:00,  5.37s/it]
*** Error running postprocess_batch_list: C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Users\ernie\stable-diffusion-webui\modules\scripts.py", line 832, in postprocess_batch_list
        script.postprocess_batch_list(p, pp, *script_args, **kwargs)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 80, in postprocess_batch_list
        params.prompt_scheduler.save_infotext_img(p)
    AttributeError: 'NoneType' object has no attribute 'save_infotext_img'

---
*** Error running postprocess: C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Users\ernie\stable-diffusion-webui\modules\scripts.py", line 816, in postprocess
        script.postprocess(p, processed, *script_args)
      File "C:\Users\ernie\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 90, in postprocess
        params.prompt_scheduler.save_infotext_txt(res)
    AttributeError: 'NoneType' object has no attribute 'save_infotext_txt'

12441409 commented 5 months ago

me too

fanchunmeng-98 commented 4 months ago

me too