alibaba / animate-anything

Fine-Grained Open Domain Image Animation with Motion Guidance
https://animationai.github.io/AnimateAnything/
MIT License
779 stars 63 forks

RuntimeError while running train_svd #20

Closed sam-motamed closed 10 months ago

sam-motamed commented 10 months ago

Thank you for sharing train_svd. I am trying to run it with a single video / caption pair. I reduced num_frames in train_svd.yaml to 2 in order to save VRAM, but I am getting the following tensor size mismatch. Any idea what is going wrong?

python train_svd.py --config example/train_svd.yaml pretrained_model_path=stabilityai/stable-video-diffusion-img2vid-xt
Initializing the conversion map
animation/lib/python3.10/site-packages/accelerate/accelerator.py:371: UserWarning: `log_with=tensorboard` was passed but no supported trackers are currently installed.
  warnings.warn(f"`log_with={log_with}` was passed but no supported trackers are currently installed.")
01/05/2024 13:02:43 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: fp16

Loading pipeline components...:   0%|                                                             | 0/5 [00:00<?, ?it/s]Loaded feature_extractor as CLIPImageProcessor from `feature_extractor` subfolder of stabilityai/stable-video-diffusion-img2vid-xt.
Loaded vae as AutoencoderKLTemporalDecoder from `vae` subfolder of stabilityai/stable-video-diffusion-img2vid-xt.
Loading pipeline components...:  40%|█████████████████████▏                               | 2/5 [00:00<00:00, 16.15it/s]Loaded unet as UNetSpatioTemporalConditionModel from `unet` subfolder of stabilityai/stable-video-diffusion-img2vid-xt.
Loaded image_encoder as CLIPVisionModelWithProjection from `image_encoder` subfolder of stabilityai/stable-video-diffusion-img2vid-xt.
Loading pipeline components...:  80%|██████████████████████████████████████████▍          | 4/5 [00:00<00:00,  3.72it/s]Loaded scheduler as EulerDiscreteScheduler from `scheduler` subfolder of stabilityai/stable-video-diffusion-img2vid-xt.
Loading pipeline components...: 100%|█████████████████████████████████████████████████████| 5/5 [00:00<00:00,  5.15it/s]
16 Attention layers using Scaled Dot Product Attention.
01/05/2024 13:02:49 - INFO - __main__ - ***** Running training *****
01/05/2024 13:02:49 - INFO - __main__ -   Num examples = 192
01/05/2024 13:02:49 - INFO - __main__ -   Num Epochs = 53
01/05/2024 13:02:49 - INFO - __main__ -   Instantaneous batch size per device = 1
01/05/2024 13:02:49 - INFO - __main__ -   Total train batch size (w. parallel, distributed & accumulation) = 1
01/05/2024 13:02:49 - INFO - __main__ -   Gradient Accumulation steps = 1
01/05/2024 13:02:49 - INFO - __main__ -   Total optimization steps = 10000
Steps:   0%|                                                                                  | 0/10000 [00:00<?, ?it/s]1428 params have been unfrozen for training.
Traceback (most recent call last):
  File "animate-anything/train_svd.py", line 1006, in <module>
    main(**args_dict)
  File "animate-anything/train_svd.py", line 805, in main
    loss = finetune_unet(pipeline, batch, use_offset_noise, 
  File "animate-anything/train_svd.py", line 503, in finetune_unet
    model_pred = unet(input_latents, c_noise.reshape([bsz]), encoder_hidden_states=encoder_hidden_states, 
  File "animation/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "animation/lib/python3.10/site-packages/accelerate/utils/operations.py", line 581, in forward
    return model_forward(*args, **kwargs)
  File "animation/lib/python3.10/site-packages/accelerate/utils/operations.py", line 569, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "animation/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "animation/lib/python3.10/site-packages/diffusers/models/unet_spatio_temporal_condition.py", line 463, in forward
    sample = upsample_block(
  File "animation/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "animation/lib/python3.10/site-packages/diffusers/models/unet_3d_blocks.py", line 2351, in forward
    hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 136 but got size 135 for tensor number 1 in the list.
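For what it's worth, one plausible way to hit exactly 136 vs 135 is an input dimension of 1080 px: the VAE divides the pixel size by 8 (giving 135), each stride-2 stage in the UNet encoder rounds up, and the decoder doubles on the way back up, so the upsampled feature map no longer matches its skip connection. A quick arithmetic sketch of this (my own illustration, not code from train_svd.py; it assumes stride-2 convs with kernel 3 / padding 1 and nearest-neighbor 2x upsampling):

```python
import math

def down(n: int) -> int:
    # stride-2 conv, kernel 3, padding 1: output size = ceil(n / 2)
    return math.ceil(n / 2)

def up(n: int) -> int:
    # 2x upsampling doubles the size
    return n * 2

h = 1080 // 8           # VAE downsamples pixels by 8 -> latent size 135
sizes = [h]
for _ in range(3):      # three stride-2 stages in the UNet encoder
    sizes.append(down(sizes[-1]))
# sizes == [135, 68, 34, 17]: the odd 135 rounds up to 68 on the way down
decoded = up(up(up(sizes[-1])))   # 17 -> 34 -> 68 -> 136 on the way back up
# torch.cat([hidden_states (136), skip (135)], dim=1) then raises the error above
print(sizes, decoded)
```

With an even, power-of-two-friendly latent size (e.g. 128) the down/up round trip returns to the same size and the concatenation succeeds.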
sam-motamed commented 10 months ago

I guess this is due to the size of the training video. It would be great to add an auto-resize for this; I might do it :)

JunruiXiao commented 10 months ago

Hi @sam-motamed! I'm training with webvid2M and hit the same error. Could you please tell me how to fix it?

Traceback (most recent call last):
  File "/root/paddlejob/workspace/xiaojunrui/SVDLCM/train_svd.py", line 1014, in <module>
    main(**args_dict)
  File "/root/paddlejob/workspace/xiaojunrui/SVDLCM/train_svd.py", line 815, in main
    loss = finetune_unet(pipeline, batch, use_offset_noise,
  File "/root/paddlejob/workspace/xiaojunrui/SVDLCM/train_svd.py", line 504, in finetune_unet
    model_pred = unet(input_latents, c_noise.reshape([bsz]), encoder_hidden_states=encoder_hidden_states,
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/accelerate/utils/operations.py", line 680, in forward
    return model_forward(*args, **kwargs)
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/accelerate/utils/operations.py", line 668, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/diffusers/models/unet_spatio_temporal_condition.py", line 463, in forward
    sample = upsample_block(
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/lcmlora/lib/python3.10/site-packages/diffusers/models/unet_3d_blocks.py", line 2351, in forward
    hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 12 but got size 11 for tensor number 1 in the list.

sam-motamed commented 10 months ago

Are you using a single video / caption setting? How are you using webvid2M?

JunruiXiao commented 10 months ago

> Are you using a single video / caption setting? How are you using webvid2M?

Yes, I wrote a csv file as "video path, caption".

JunruiXiao commented 10 months ago

I have solved this problem! Thanks

inspirelt commented 7 months ago

@JunruiXiao Hi, I met the same 'size mismatch' problem. Could you tell me how you solve it? Thanks.

sculmh commented 6 months ago

@inspirelt What resolution is your training data set to? The VAE encoder downsamples the width and height by a factor of 8, so it is recommended to set the resolution to a multiple of 8.
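To apply that when preprocessing, one could snap each dimension down to a safe multiple before resizing the frames. Since the UNet itself halves the latent three more times, a multiple of 64 in pixels (8 from the VAE times 2^3 from the UNet) is the conservative choice. A minimal sketch, where `snap_resolution` is a hypothetical helper, not part of this repo:

```python
def snap_resolution(width: int, height: int, multiple: int = 64) -> tuple[int, int]:
    """Round width/height down to the nearest multiple (never below one multiple)."""
    snap = lambda n: max(multiple, (n // multiple) * multiple)
    return snap(width), snap(height)

# e.g. a 1920x1080 video would be resized to 1920x1024 before encoding
print(snap_resolution(1920, 1080))
```

Rounding down rather than up avoids upscaling the source frames, at the cost of a slight crop or squeeze.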

inspirelt commented 6 months ago

Thanks. I resolved it by specifying a proper resolution.
