ali-vilab / UniAnimate

Code for Paper "UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation".
https://unianimate.github.io/

Fine-tuning of v2-1_512-ema-pruned.ckpt #75

Open caslix opened 1 week ago

caslix commented 1 week ago

Hello! I fine-tuned v2-1_512-ema-pruned.ckpt, but it did not affect the result in any way. I also tried ready-made model versions from Civitai.

Could you tell me, please, what might be wrong? Are there any special steps required? Thanks!
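One sanity check, offered here only as a hedged sketch and not as UniAnimate's own tooling: confirm that the fine-tune actually changed the tensors in the checkpoint and kept the key layout the original loader expects. The file names below are placeholders.

```python
# Hedged sketch: compare the original and fine-tuned checkpoints to confirm the
# fine-tune actually changed the weights and kept the key layout.
# File names are placeholders; newer torch may need weights_only=False here.
import torch

orig = torch.load("v2-1_512-ema-pruned.ckpt", map_location="cpu")
tuned = torch.load("v2-1_512-finetuned.ckpt", map_location="cpu")

# Stable Diffusion checkpoints usually nest the weights under "state_dict".
orig_sd = orig.get("state_dict", orig)
tuned_sd = tuned.get("state_dict", tuned)

missing = set(orig_sd) - set(tuned_sd)
extra = set(tuned_sd) - set(orig_sd)
print(f"keys missing from fine-tuned ckpt: {len(missing)}, extra keys: {len(extra)}")

common = orig_sd.keys() & tuned_sd.keys()
changed = sum(
    1 for k in common
    if isinstance(orig_sd[k], torch.Tensor)
    and isinstance(tuned_sd[k], torch.Tensor)
    and orig_sd[k].shape == tuned_sd[k].shape
    and not torch.equal(orig_sd[k], tuned_sd[k])
)
print(f"tensors that actually differ: {changed} / {len(common)}")
```

If very few tensors differ, or the key sets diverge, the fine-tune either did not take effect or produced a checkpoint layout the pipeline silently falls back from.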

Ahmed-Ezzat20 commented 5 days ago

Hello @caslix

As I am trying to proceed with the fine-tuning process myself, could you share what you have achieved, or perhaps we could collaborate on this? I am working on collecting more data and don't have the training script yet.

caslix commented 4 days ago

> Hi @caslix
>
> As I am trying to proceed with the fine-tuning process, could you share what you have achieved, or perhaps we could collaborate on this? I am working on collecting more data and don't have the training script yet.

Greetings! The problem is that I saw no change at all, positive or negative, in the results after fine-tuning the Stable Diffusion checkpoint.
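One possible explanation for seeing no change is that inference still loads the original checkpoint path rather than the fine-tuned file. As a hedged sketch (the config file name follows the repo's inference instructions, but the key names inside it are not assumed), one can simply list every checkpoint-like path the config references and verify the fine-tuned file appears:

```python
# Hedged sketch: scan a UniAnimate config for checkpoint-style paths to make
# sure inference points at the fine-tuned file, not the original
# v2-1_512-ema-pruned.ckpt. The config path is taken from the repo's README.
import yaml

with open("configs/UniAnimate_infer.yaml") as f:
    cfg = yaml.safe_load(f)

def walk(node, prefix=""):
    # Recursively yield (key_path, value) pairs from nested dicts/lists.
    if isinstance(node, dict):
        for k, v in node.items():
            yield from walk(v, f"{prefix}{k}.")
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield from walk(v, f"{prefix}{i}.")
    else:
        yield prefix.rstrip("."), node

for key, value in walk(cfg):
    if isinstance(value, str) and value.endswith((".ckpt", ".pth", ".pt", ".safetensors")):
        print(f"{key}: {value}")
```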

Ahmed-Ezzat20 commented 4 days ago

What was your fine-tuning configuration for the data and the model? @caslix