showlab / MotionDirector

[ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models.
https://showlab.github.io/MotionDirector/
Apache License 2.0

Question about Spatial loss #44

DanahYatim opened this issue 4 weeks ago

DanahYatim commented 4 weeks ago

Hi, thank you so much for sharing your amazing work!

In the paper it is mentioned that the spatial LoRAs are trained on a single frame randomly sampled from the training video, to fit its appearance while ignoring its motion, based on a spatial loss reformulated as

$$\mathcal{L}_{spatial} = \mathbb{E}_{z_0,\,\epsilon,\,t,\,i}\left[\,\big\|\epsilon - \epsilon_\theta(z_{t,i},\, c,\, t)\big\|_2^2\,\right],$$

where $z_{t,i}$ is the noised latent of the single frame $i$ sampled from the video and $c$ is the text condition.
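
For concreteness, here is a minimal sketch of what that single-frame objective could look like, assuming a generic `diffusers`-style latent-diffusion setup; `unet`, `scheduler`, and `text_emb` are hypothetical placeholders for illustration, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

def spatial_loss(unet, scheduler, video_latents, text_emb):
    # video_latents: (B, C, F, H, W) latents of the training video.
    # Hypothetical setup: `unet` is a 3D U-Net, `scheduler` a diffusers
    # noise scheduler, `text_emb` the text-conditioning embeddings.
    b, c, f, h, w = video_latents.shape

    # Pick ONE random frame: fit appearance, ignore motion.
    i = torch.randint(0, f, (1,)).item()
    frame = video_latents[:, :, i : i + 1]  # (B, C, 1, H, W)

    noise = torch.randn_like(frame)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (b,),
                      device=frame.device)
    noisy = scheduler.add_noise(frame, noise, t)

    # The 3D U-Net sees this as a length-1 video; temporal layers have
    # nothing to attend over, so only spatial layers shape this loss.
    pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
    return F.mse_loss(pred, noise)
```
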

I wanted to ask about the reasoning behind passing a single frame to the 3D U-Net rather than all frames when training the spatial LoRAs. Given that the pretrained T2V model was trained on videos, why does it make sense to pass a single frame for this loss? Is the model even capable of generating a single frame?

Thanks

ruizhaocv commented 3 weeks ago

Hi. The spatial LoRAs are injected only into the spatial layers, which process each frame independently and are therefore agnostic to the number of frames; to the model, a single frame is simply a video of length one.
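
To illustrate why spatial layers are frame-count-agnostic, here is a toy sketch (not MotionDirector's actual code) of how spatial layers in video diffusion U-Nets typically work: frames are folded into the batch dimension, so the layer, and any LoRA injected into it, operates on each frame in isolation:

```python
import torch
import torch.nn as nn
from einops import rearrange

class SpatialBlock(nn.Module):
    """Toy spatial layer of a 3D U-Net. Frames are folded into the
    batch dimension, so the 2D conv (and any LoRA wrapped around it)
    never sees the frame count. Illustrative only."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):  # x: (B, C, F, H, W)
        f = x.shape[2]
        x = rearrange(x, "b c f h w -> (b f) c h w")  # frames -> batch
        x = self.conv(x)                              # per-frame 2D op
        return rearrange(x, "(b f) c h w -> b c f h w", f=f)

block = SpatialBlock(8)
x1 = torch.randn(1, 8, 1, 32, 32)    # single frame
x16 = torch.randn(1, 8, 16, 32, 32)  # full 16-frame clip
assert block(x1).shape == x1.shape
assert block(x16).shape == x16.shape  # same layer, any frame count
```

Since the same weights are applied per frame either way, training the spatial LoRAs on one sampled frame is a well-defined objective for these layers, while also avoiding fitting the video's frame-to-frame motion.
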