Vincent-luo opened this issue 2 months ago
Right now I'm training a LoRA at 1024x576x3 and it takes 23.8 GB on my 3090.
Thanks for the suggestions! I'll give them a try. I've noticed that the official AnimateDiff code doesn't use gradient checkpointing by default, and enabling it can save a lot of GPU memory.
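
For reference, here's a minimal sketch of how gradient checkpointing is usually switched on for a diffusers UNet. I'm using the 2D `UNet2DConditionModel` and the SD 1.5 checkpoint purely for illustration; the AnimateDiff training script wraps its own UNet class, so the exact call site there may look different:

```python
from diffusers import UNet2DConditionModel

# Load a UNet (assumes the standard SD 1.5 checkpoint layout).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Gradient checkpointing re-computes intermediate activations during the
# backward pass instead of storing them, trading extra compute for memory.
unet.enable_gradient_checkpointing()
unet.train()
```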
Yes, I'm using checkpointing too.
Hello, I noticed that you're able to train on more than 300 frames using an A100 GPU. I'm curious about your training process: are you only training the `to_q` projections, or the entire motion module? I've been using the official AnimateDiff training script, and training on just 32 frames consumes about 30 GB of VRAM. I'm wondering if you've implemented any optimizations to improve efficiency. It would be helpful if you could share some details about your training setup and any techniques you're using. Thanks!