guoqincode / Open-AnimateAnyone

Unofficial Implementation of Animate Anyone

about training memory optimization #42

Closed zhangvia closed 8 months ago

zhangvia commented 8 months ago

In the README, you mentioned that you would optimize the training code using DeepSpeed and Accelerate. However, as far as I know, the DeepSpeed integration in the Accelerate library does not support training multiple models at once. Do you have any suggestions on how to use DeepSpeed to reduce training memory?

guoqincode commented 8 months ago

You can try wrapping models together.
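A minimal sketch of the "wrap models together" idea: put the sub-models inside one `nn.Module` so Accelerate/DeepSpeed only sees a single model to prepare and shard. The names `reference_net` and `denoising_unet`, the placeholder layers, and the forward signature are illustrative assumptions, not the repo's actual modules.

```python
import torch
import torch.nn as nn
from accelerate import Accelerator


class CombinedModel(nn.Module):
    """Wrap several trainable sub-models in one nn.Module so the
    Accelerate/DeepSpeed integration treats them as a single model."""

    def __init__(self, reference_net: nn.Module, denoising_unet: nn.Module):
        super().__init__()
        self.reference_net = reference_net
        self.denoising_unet = denoising_unet

    def forward(self, x):
        # Illustrative only: the real models take more arguments
        # (e.g. reference image, noisy latents, timesteps).
        ref_features = self.reference_net(x)
        return self.denoising_unet(ref_features)


# Placeholder sub-models standing in for the real reference / denoising networks.
reference_net = nn.Linear(16, 16)
denoising_unet = nn.Linear(16, 16)

accelerator = Accelerator()  # DeepSpeed settings come from `accelerate config`
model = CombinedModel(reference_net, denoising_unet)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# A single prepare() call now covers both sub-models and one optimizer.
model, optimizer = accelerator.prepare(model, optimizer)
```

With this pattern, one optimizer over `model.parameters()` covers all wrapped sub-models, which sidesteps the single-model restriction of the DeepSpeed backend in Accelerate.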