G-U-N / AnimateLCM

[SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data
https://animatelcm.github.io
MIT License
591 stars · 42 forks

size issue with AnimateLCM I2V model #35

Open dreamyou070 opened 1 month ago

dreamyou070 commented 1 month ago

When running inference with the AnimateLCM I2V model, the recommended size is (768, 512), but I cannot run inference at that size on an A100 GPU. In the paper, you trained on A800 GPUs. Is there any way to reduce the size while preserving quality? (I cannot even run it once.)

G-U-N commented 1 month ago

Do you mean you are hitting a GPU out-of-memory error? That is not normal: in my testing, inference needs less than 20 GB. If you are on PyTorch < 2.0, please make sure xformers is properly installed, which greatly reduces GPU memory use. Additionally, you can enable `enable_vae_slicing` to reduce the memory needed for VAE decoding.
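A minimal sketch of the two memory-saving steps suggested above, assuming a diffusers-style `pipe` object (the real calls are `pipe.enable_xformers_memory_efficient_attention()` and `pipe.enable_vae_slicing()`; the helper below is hypothetical and just encodes which steps apply for a given PyTorch version):

```python
def memory_saving_steps(torch_version: str) -> list[str]:
    """Return the pipeline calls to apply, per the maintainer's advice.

    `torch_version` is a version string such as "1.13.1" or "2.1.0".
    """
    steps = []
    major = int(torch_version.split(".")[0])
    if major < 2:
        # PyTorch < 2.0 lacks built-in scaled-dot-product attention,
        # so memory-efficient attention comes from xformers.
        steps.append("pipe.enable_xformers_memory_efficient_attention()")
    # VAE slicing decodes the latent batch one slice at a time,
    # trading a little speed for a lower peak memory during decoding.
    steps.append("pipe.enable_vae_slicing()")
    return steps
```

On PyTorch >= 2.0 the xformers step can be skipped, since memory-efficient attention is built in; VAE slicing helps in either case.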

dreamyou070 commented 1 month ago

Thanks so much! I successfully ran image-to-video inference at 768 resolution. Thanks so much!!
