tumurzakov / AnimateDiff

AnimateDiff with training support
Apache License 2.0

Fork changes (by tumurzakov):

  1. Train AnimateDiff on 24+ frames, by multiplying the existing motion module's positional encoding by a scale factor and finetuning (see the loading sketch after this list).

    # Multiply positional-encoding ("pe") weights by the multiplier to train on more than 24 frames
    # (`repeat` is einops.repeat)
    if motion_module_pe_multiplier > 1:
        for key in motion_module_state_dict:
            if 'pe' in key:
                t = motion_module_state_dict[key]
                t = repeat(t, "b f d -> b (f m) d", m=motion_module_pe_multiplier)
                motion_module_state_dict[key] = t

    I trained up to 264 frames on an A100.

  2. Train AnimateDiff + LoRA/DreamBooth

  3. Infinite inference (credits to dajes), controlled by the temporal_context and video_length params.

  4. ControlNet (works with infinite inference). VRAM-hungry: only about 120 frames can be inferred with a single ControlNet module on an A100.

  5. Prompt walking. Start from "Egg" and finish with "Duck" (see the keyframe sketch after this list):

    {
        0: "Egg",
        10: "Duck",
    }
  6. Updated to the latest diffusers version.

  7. Train LoRA (all layers; SD and motion-module layers at once, could be separated if needed).

  8. Region prompter

  9. FreeInit added
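
Loading sketch for item 1. The snippet below shows how the positional-encoding scaling above could be applied to a downloaded motion module checkpoint; the checkpoint path, the multiplier value, and the final load into the UNet are illustrative assumptions, not this fork's exact code.

    # Illustrative only: load a motion module checkpoint, scale its positional
    # encodings ("pe") as in item 1, and keep the result for finetuning.
    import torch
    from einops import repeat

    motion_module_pe_multiplier = 2  # assumption: 2x -> up to 48 frames

    # Path matches the checkpoints referenced later in this README.
    motion_module_state_dict = torch.load(
        "models/Motion_Module/mm_sd_v15.ckpt", map_location="cpu"
    )

    if motion_module_pe_multiplier > 1:
        for key in motion_module_state_dict:
            if "pe" in key:
                motion_module_state_dict[key] = repeat(
                    motion_module_state_dict[key],  # shape (b, f, d)
                    "b f d -> b (f m) d",
                    m=motion_module_pe_multiplier,
                )

    # The scaled state dict is then loaded into the UNet's motion modules and
    # finetuned; that wiring depends on the training pipeline and is omitted here.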
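
Keyframe sketch for item 5. One simple way to interpret a {frame: prompt} map like the one above is to blend the two surrounding keyframe prompts for every in-between frame; the linear blend below only illustrates the idea and is not necessarily how prompt walking is implemented in this fork.

    # Illustrative only: expand a {frame: prompt} keyframe map into
    # (prompt_a, prompt_b, blend) triples for a clip of `video_length` frames.
    def walk_prompts(keyframes, video_length):
        frames = sorted(keyframes)
        plan = []
        for f in range(video_length):
            prev = max((k for k in frames if k <= f), default=frames[0])
            nxt = min((k for k in frames if k > f), default=frames[-1])
            blend = 0.0 if nxt == prev else (f - prev) / (nxt - prev)
            plan.append((keyframes[prev], keyframes[nxt], blend))
        return plan

    # {0: "Egg", 10: "Duck"}: frame 5 is a 50/50 mix of "Egg" and "Duck".
    print(walk_prompts({0: "Egg", 10: "Duck"}, video_length=16)[5])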

AnimateDiff


This repository is the official implementation of AnimateDiff.

AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
Yuwei Guo, Ceyuan Yang*, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai

*Corresponding Author

Arxiv Report | Project Page

Todo

Setup for Inference

Prepare Environment

Our approach originally took around 60 GB of GPU memory for inference, with an NVIDIA A100 recommended. We have since updated the inference code with xformers and a sequential decoding trick; AnimateDiff now takes only ~12 GB of VRAM for inference and runs on a single RTX 3090.
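
The sequential decoding trick refers to decoding the video latents through the VAE one frame at a time rather than all at once. The sketch below is only an illustration using a diffusers-style VAE; the scaling constant and tensor layout follow common SD 1.x conventions and are not code from this repo.

    # Sketch only: decode video latents frame by frame to keep VAE memory low.
    import torch

    @torch.no_grad()
    def decode_latents_sequentially(vae, latents):
        # latents: (batch, channels, frames, height, width)
        frames = []
        for f in range(latents.shape[2]):
            z = latents[:, :, f] / 0.18215       # standard SD 1.x latent scaling
            frames.append(vae.decode(z).sample)  # decode a single frame
        return torch.stack(frames, dim=2)        # back to (b, c, f, h, w)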

git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff

conda env create -f environment.yaml
conda activate animatediff

Download Base T2I & Motion Module Checkpoints

We provide two versions of our Motion Module: one trained on stable-diffusion-v1-4 and one finetuned on v1-5. It's recommended to try both of them for best results.

git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/

bash download_bashscripts/0-MotionModule.sh

You may also directly download the motion module checkpoints from Google Drive, then put them in the models/Motion_Module/ folder.

Prepare Personalized T2I

Here we provide inference configs for 8 demo T2I models from CivitAI. You may run the following bash scripts to download these checkpoints.

bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh

Inference

After downloading the above personalized T2I checkpoints, run the following commands to generate animations. The results will automatically be saved to the samples/ folder.

python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/4-MajicMix.yaml
python -m scripts.animate --config configs/prompts/5-RealisticVision.yaml
python -m scripts.animate --config configs/prompts/6-Tusun.yaml
python -m scripts.animate --config configs/prompts/7-FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/8-GhibliBackground.yaml

To generate animations with a new DreamBooth/LoRA model, you may create a new config .yaml file in the following format:

NewModel:
  path: "[path to your DreamBooth/LoRA model .safetensors file]"
  base: "[path to the LoRA base model .safetensors file; leave it as an empty string if not needed]"

  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"

  steps:          25
  guidance_scale: 7.5

  prompt:
    - "[positive prompt]"

  n_prompt:
    - "[negative prompt]"

Then run the following commands:

python -m scripts.animate --config [path to the config file]

Gallery

Here we demonstrate several of the best results we found in our experiments.

Model: ToonYou

Model: Counterfeit V3.0

Model: Realistic Vision V2.0

Model: majicMIX Realistic

Model: RCNZ Cartoon

Model: FilmVelvia

Longer generations

You can also generate longer animations by using overlapping sliding windows.

python -m scripts.animate --config configs/prompts/{your_config}.yaml --L 64 --context_length 16

Sliding-window parameters (a short sketch follows this list):

L - the length of the generated animation.

context_length - the length of the sliding window (limited by the motion module's capacity); defaults to L.

context_overlap - how much neighbouring contexts overlap; defaults to context_length / 2.

context_stride - 2^context_stride is the maximum stride between two neighbouring frames; defaults to 0.
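
To make these parameters concrete, here is an illustrative sketch (not the repo's actual context scheduler, and ignoring context_stride) of how overlapping windows could cover an L-frame animation; latents in the overlapping regions are typically blended across windows.

    # Sketch only: enumerate overlapping frame windows for a clip of L frames.
    def sliding_windows(L, context_length, context_overlap=None):
        if context_overlap is None:
            context_overlap = context_length // 2    # documented default
        step = context_length - context_overlap
        windows, start = [], 0
        while True:
            end = min(start + context_length, L)
            windows.append(list(range(max(end - context_length, 0), end)))
            if end == L:
                break
            start += step
        return windows

    # L=64, context_length=16 -> [0..15], [8..23], [16..31], ..., [48..63]
    print(len(sliding_windows(64, 16)))  # 7 windows of 16 frames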

Gallery examples extended this way:

Model: ToonYou

Model: Realistic Vision V2.0

Community Cases

Here are some samples contributed by community artists. Create a Pull Request if you would like to show your results here 😚.

Character Model: Yoimiya (with an initial reference image; see the WIP fork for the extended implementation).

Character Model: Paimon; Pose Model: Hold Sign

BibTeX

    @misc{guo2023animatediff,
      title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
      author={Yuwei Guo and Ceyuan Yang and Anyi Rao and Yaohui Wang and Yu Qiao and Dahua Lin and Bo Dai},
      year={2023},
      eprint={2307.04725},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

Contact Us

Yuwei Guo: guoyuwei@pjlab.org.cn
Ceyuan Yang: yangceyuan@pjlab.org.cn
Bo Dai: daibo@pjlab.org.cn

Acknowledgements

Codebase built upon Tune-a-Video (https://github.com/showlab/Tune-A-Video).