G-U-N / Motion-I2V

[SIGGRAPH 2024] Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling
https://xiaoyushi97.github.io/Motion-I2V/

Is there any plan to release the checkpoint or the pretrained checkpoint? #3

Open trouble-maker007 opened 1 month ago

trouble-maker007 commented 1 month ago

pretrained checkpoint:

output_dir: "outputs_ctrl_flow_gen"
pretrained_model_path: "models/stage1/StableDiffusion-FlowGen/"
vae_pretrained_path: "models/stage1/StableDiffusion-FlowGen/vae_flow/diffusion_pytorch_model.bin"
resumed_model_path: "models/stage1/StableDiffusion-FlowGen/unet/diffusion_pytorch_model.bin"
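For reference, a minimal sketch of inspecting those checkpoint files, assuming they are standard PyTorch state dicts in the diffusers layout the config points to (the paths are copied from the config above):

```python
# Minimal sketch; assumes the .bin files are plain PyTorch state dicts
# in the diffusers layout referenced by the config above.
import torch

vae_path = "models/stage1/StableDiffusion-FlowGen/vae_flow/diffusion_pytorch_model.bin"
unet_path = "models/stage1/StableDiffusion-FlowGen/unet/diffusion_pytorch_model.bin"

vae_state = torch.load(vae_path, map_location="cpu")    # FlowGen VAE weights
unet_state = torch.load(unet_path, map_location="cpu")  # FlowGen UNet weights

print(len(vae_state), "tensors in the VAE checkpoint")
print(len(unet_state), "tensors in the UNet checkpoint")
```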

G-U-N commented 1 month ago

Please check https://huggingface.co/wangfuyun/Motion-I2V.
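For anyone looking for the weights, a minimal sketch of downloading that repository with huggingface_hub; the `local_dir` target is an assumption, not a documented layout:

```python
# Minimal sketch; repo_id comes from the reply above, local_dir is illustrative.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="wangfuyun/Motion-I2V",
    local_dir="models",  # place the files where the training configs expect them
)
```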

trouble-maker007 commented 1 month ago

@G-U-N Thanks for the quick response. Does the checkpoint support fine-tuning with more frames to generate longer videos, such as 6 seconds?

TomSuen commented 1 month ago

> Please check https://huggingface.co/wangfuyun/Motion-I2V.

Hi, I have a question about https://github.com/G-U-N/Motion-I2V/blob/55a4d02190a60f0695c3979d637a09ae4fee2609/scripts/app.py#L408

Could the `personalized_unet_path` be changed to any other UNet trained from SD 1.5?
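As an illustration of what is being asked, a minimal sketch of pointing that variable at a different SD 1.5-derived UNet, assuming `personalized_unet_path` is a diffusers-format checkpoint directory; whether Motion-I2V's motion modules remain compatible with an arbitrary fine-tune is exactly the open question:

```python
# Minimal sketch; the path is hypothetical, and compatibility with
# Motion-I2V's motion modules is not guaranteed.
from diffusers import UNet2DConditionModel

personalized_unet_path = "path/to/any-sd15-finetune"  # hypothetical SD 1.5 fine-tune
unet = UNet2DConditionModel.from_pretrained(personalized_unet_path, subfolder="unet")
print(sum(p.numel() for p in unet.parameters()), "UNet parameters loaded")
```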