showlab / MotionDirector

[ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models.
https://showlab.github.io/MotionDirector/
Apache License 2.0
850 stars 54 forks

why the inference results are not aligned with the validation results? #18

Closed GFENGG closed 3 months ago

GFENGG commented 9 months ago

Hello, I used the weights saved after the training step for inference, but the results are not aligned with the results generated by the last validation step during training. What is the reason for this phenomenon?

ruizhaocv commented 9 months ago

Hi. Typically, this is because the randomly sampled noises differ from those used in the training stage. Could you please provide more details? For example, are you training MotionDirector on a single video or on multiple videos?
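To make the noise point concrete: diffusion sampling starts from randomly drawn Gaussian latents, so unless the generator is seeded identically, validation during training and a later inference run begin from different noise and produce different videos. A minimal sketch in plain PyTorch (not the MotionDirector API; the latent shape below is illustrative):

```python
import torch

def sample_initial_latents(shape, seed=None, device="cpu"):
    # Diffusion sampling begins from Gaussian noise; fixing the seed
    # makes the starting latents reproducible across separate runs.
    gen = torch.Generator(device=device)
    if seed is not None:
        gen.manual_seed(seed)
    return torch.randn(shape, generator=gen, device=device)

# Same seed -> identical starting latents; different/unset seeds -> different noise.
a = sample_initial_latents((1, 4, 16, 32, 32), seed=42)
b = sample_initial_latents((1, 4, 16, 32, 32), seed=42)
c = sample_initial_latents((1, 4, 16, 32, 32), seed=7)
assert torch.equal(a, b)
assert not torch.equal(a, c)
```

Logging the seed used at validation time and passing the same seed at inference is usually enough to reproduce the validation outputs exactly.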

GFENGG commented 9 months ago

> Hi. Typically, this is because the randomly sampled noises are different from the training stage. Could you please provide more details? Like are you training MotionDirector on a single video or multiple videos?

Thanks for your reply, it helps: it was probably due to different settings between training and inference. I have another question: how well does MotionDirector generalize? For example, if I use a custom DreamBooth weight that differs from the one used in training, can the temporal weights trained with MotionDirector still work in this situation?

ruizhaocv commented 9 months ago

If you use DreamBooth to fine-tune only the spatial layers, I think it is OK, just like the results shown here. If the temporal layers are also changed, I'm not sure what would happen. You can try it out. Looking forward to your insights.
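The suggestion above amounts to mixing two checkpoints: keep the spatial layers from the DreamBooth weights and overlay only the temporal layers learned by MotionDirector. A hypothetical sketch of that merge, using plain state dicts; the `temp_attn` key marker is an assumed naming convention, not the repository's actual layout:

```python
def merge_state_dicts(spatial_sd, temporal_sd, temporal_marker="temp_attn"):
    """Hypothetical sketch: start from DreamBooth spatial weights and
    overlay only the temporal layers (keys containing `temporal_marker`)
    taken from a MotionDirector checkpoint."""
    merged = dict(spatial_sd)  # spatial layers come from DreamBooth
    for key, weight in temporal_sd.items():
        if temporal_marker in key:  # only temporal layers are replaced
            merged[key] = weight
    return merged

# Toy checkpoints with one spatial and one temporal weight each.
dreambooth = {"unet.conv.weight": 1.0, "unet.temp_attn.weight": 0.0}
motiondirector = {"unet.conv.weight": 9.0, "unet.temp_attn.weight": 5.0}
merged = merge_state_dicts(dreambooth, motiondirector)
# -> spatial weight kept from DreamBooth, temporal weight from MotionDirector
```

Whether such a merge produces coherent motion depends on how compatible the two fine-tunes are, which is exactly the open question in the reply above.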