Hi, I want to follow the AnimateDiff training procedure. Following the tutorial (https://github.com/guoyww/AnimateDiff/blob/main/__assets__/docs/animatediff.md), the first step is finetuning with the configs/training/v1/image_finetune.yaml config.
However, in the paper there is no UNet finetuning step, only LoRA training for domain adaptation.
Could you clarify what the UNet image-layer finetuning is for?
I have the same question. The paper says only a LoRA is needed for domain adaptation, but in this project the whole UNet is updated during the image_finetune stage.
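For anyone trying to reconcile the two, here is a minimal sketch of the difference between the regimes being discussed. This is hypothetical toy code, not the actual AnimateDiff training loop: the `LoRALinear` class, the 320-dim projection, and the parameter counts are illustrative assumptions, not names from the repo. The point is that image_finetune-style training leaves every UNet weight trainable, while LoRA-style domain adaptation (as described in the paper) freezes the pretrained weights and learns only small low-rank adapters.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + B(A x)."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():            # freeze the pretrained weights
            p.requires_grad_(False)
        self.lora_down = nn.Linear(base.in_features, rank, bias=False)
        self.lora_up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_up.weight)         # delta starts at zero, so output is initially unchanged

    def forward(self, x):
        return self.base(x) + self.lora_up(self.lora_down(x))

# Pretend each of these is one attention projection inside the spatial (image) UNet.
proj_full = nn.Linear(320, 320)                      # image_finetune: everything trainable
proj_lora = LoRALinear(nn.Linear(320, 320), rank=4)  # paper: base frozen, adapters trainable

def trainable(m):
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

print(f"full finetune trainable params: {trainable(proj_full)}")  # 102720
print(f"LoRA trainable params:          {trainable(proj_lora)}")  # 2560
```

If I read the repo's behavior correctly from this thread, image_finetune.yaml hands the optimizer the full set of UNet parameters (the first regime above), whereas the paper's LoRA route would train only adapter weights like the `lora_down`/`lora_up` pair in the sketch.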