aigc-apps / EasyAnimate

📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion
Apache License 2.0

Great work! How many GPU resources were used to train the model? #36

Open gxd1994 opened 3 months ago

yunkchen commented 3 months ago

Our training consists of two stages. Each stage used more than 200 A800 GPUs and ran for about 24 × 5 hours (roughly five days). Starting from our pre-trained model, fine-tuning with LoRA only requires an A10, or a GPU with equivalent memory, for about 2 hours to achieve satisfactory results.
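
For readers wondering why LoRA fine-tuning fits on a single A10-class GPU: only small low-rank adapter matrices are trained while the base weights stay frozen. Below is a minimal, hypothetical sketch of attaching LoRA adapters with the PEFT library. The `TinyTransformerBlock`, the target module names (`to_q`, `to_k`, `to_v`, `to_out`), and the hyperparameters are illustrative assumptions, not EasyAnimate's actual training script or configuration.

```python
# Minimal sketch: wrap a toy attention block with LoRA adapters via PEFT.
# This is NOT the EasyAnimate codebase; module names and ranks are assumed.
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model


class TinyTransformerBlock(nn.Module):
    """Toy attention block, standing in for a diffusion-transformer layer."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return self.to_out(attn @ v)


model = TinyTransformerBlock()

# Only the low-rank adapter weights are trainable; base weights stay frozen,
# which is what keeps the memory footprint within reach of a single A10.
lora_config = LoraConfig(
    r=16,                 # adapter rank (assumed value)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["to_q", "to_k", "to_v", "to_out"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # prints the small fraction of trainable weights
```

In practice you would apply the same wrapping to the full pre-trained transformer and train only the adapter parameters with your usual diffusion training loop.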