MotionClone: Training-Free Motion Cloning for Controllable Video Generation

This repository is the official implementation of MotionClone, a training-free framework that enables motion cloning from a reference video for controllable video generation, without cumbersome video inversion processes.
Pengyang Ling*,
Jiazi Bu*,
Pan Zhang†,
Xiaoyi Dong,
Yuhang Zang,
Tong Wu,
Huaian Chen,
Jiaqi Wang,
Yi Jin†
(*Equal Contribution) (†Corresponding Author)
More results are shown on the Project Page.
MotionClone utilizes sparse temporal attention weights as motion representations for motion guidance, facilitating diverse motion transfer across varying scenarios. Meanwhile, MotionClone allows for the direct extraction of motion representations through a single denoising step, bypassing cumbersome inversion processes and thus promoting both efficiency and flexibility.
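To make the idea concrete, below is a minimal sketch (not the repository's actual implementation) of how a sparse motion representation could be extracted from temporal attention weights and used as a guidance loss. The function names, the top-k sparsification, and the tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def sparse_motion_mask(attn: torch.Tensor, k: int = 1) -> torch.Tensor:
    """Keep only the top-k temporal attention weights per query as the
    sparse motion representation.

    attn: temporal attention weights; an assumed shape would be
          (batch * spatial_positions, heads, frames, frames).
    """
    _, topk_idx = attn.topk(k, dim=-1)     # most significant frame-to-frame weights
    mask = torch.zeros_like(attn, dtype=torch.bool)
    mask.scatter_(-1, topk_idx, True)      # True only at the sparse positions
    return mask

def motion_guidance_loss(ref_attn: torch.Tensor, gen_attn: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between reference and generated temporal attention,
    restricted to the sparse positions extracted from the reference."""
    mask = sparse_motion_mask(ref_attn)
    return F.mse_loss(gen_attn[mask], ref_attn[mask])
```

In this sketch, `ref_attn` would come from a single denoising step on the noised reference video, which is why no inversion is required.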
Set up the repository and conda environment:
git clone https://github.com/Bujiazi/MotionClone.git
cd MotionClone
conda env create -f environment.yaml
conda activate motionclone
Download the Stable Diffusion v1.5 base model:
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/
After downloading, the Stable Diffusion weights should be located in models/StableDiffusion.
Manually download the community .safetensors model from RealisticVision V5.1 and save it to models/DreamBooth_LoRA.
Manually download the AnimateDiff modules from AnimateDiff; v3_adapter_sd_v15.ckpt and v3_sd15_mm.ckpt are recommended. Save the modules to models/Motion_Module.
Manually download "v3_sd15_sparsectrl_rgb.ckpt" and "v3_sd15_sparsectrl_scribble.ckpt" from AnimateDiff. Save the modules to models/SparseCtrl
.
python t2v_video_sample.py --inference_config "configs/t2v_camera.yaml" --examples "configs/t2v_camera.jsonl"
python t2v_video_sample.py --inference_config "configs/t2v_object.yaml" --examples "configs/t2v_object.jsonl"
python i2v_video_sample.py --inference_config "configs/i2v_sketch.yaml" --examples "configs/i2v_sketch.jsonl"
python i2v_video_sample.py --inference_config "configs/i2v_rgb.yaml" --examples "configs/i2v_rgb.jsonl"
If you find this work helpful, please cite the following paper:
@article{ling2024motionclone,
title={MotionClone: Training-Free Motion Cloning for Controllable Video Generation},
author={Ling, Pengyang and Bu, Jiazi and Zhang, Pan and Dong, Xiaoyi and Zang, Yuhang and Wu, Tong and Chen, Huaian and Wang, Jiaqi and Jin, Yi},
journal={arXiv preprint arXiv:2406.05338},
year={2024}
}
This is the official code of MotionClone. The copyrights of the demo images and audio belong to community users. Feel free to contact us if you would like them removed.
The code is built upon the repositories below; we thank all the contributors for open-sourcing their work.