ComfyUI-AnimateAnyone-Evolved

Improved AnimateAnyone implementation that lets you use a pose image sequence and a reference image to generate a stylized video.
The current goal of this project is to achieve the desired pose2video results at 1+ FPS on GPUs equal to or better than an RTX 3080! 🚀

Currently Supported

Roadmap

Install (You can also use ComfyUI Manager)

  1. Clone this repo into Your_ComfyUI_root_directory\ComfyUI\custom_nodes\ and install the required Python packages:

    cd Your_ComfyUI_root_directory\ComfyUI\custom_nodes\
    
    git clone https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved.git
    
    pip install -r requirements.txt
    
    # If you get an error regarding diffusers, run:
    pip install --force-reinstall "diffusers>=0.26.1"
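
    # Optional sanity check: confirm the installed diffusers version is >= 0.26.1
    pip show diffusers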
  2. Download pre-trained models (a hedged download sketch follows at the end of this section):
    • stable-diffusion-v1-5_unet
    • Moore-AnimateAnyone Pre-trained Models
    • The above models need to be placed under the pretrained_weights folder as follows:
      ./pretrained_weights/
      |-- denoising_unet.pth
      |-- motion_module.pth
      |-- pose_guider.pth
      |-- reference_unet.pth
      `-- stable-diffusion-v1-5
          |-- feature_extractor
          |   `-- preprocessor_config.json
          |-- model_index.json
          |-- unet
          |   |-- config.json
          |   `-- diffusion_pytorch_model.bin
          `-- v1-inference.yaml
    • Download the CLIP image encoder (e.g. sd-image-variations-diffusers) and put it under Your_ComfyUI_root_directory\ComfyUI\models\clip_vision
    • Download the VAE (e.g. sd-vae-ft-mse) and put it under Your_ComfyUI_root_directory\ComfyUI\models\vae
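
For reference, here is a minimal download sketch using huggingface-cli (installed together with huggingface_hub, a dependency of diffusers). The repository IDs, include patterns, and target paths are assumptions based on the links above, not something this repo prescribes, and the repos may have moved; adjust them to your setup. The Moore-AnimateAnyone weights (denoising_unet.pth, motion_module.pth, pose_guider.pth, reference_unet.pth) still need to be fetched from the link in step 2 and placed directly in ./pretrained_weights/.

    # Stable Diffusion v1.5 parts expected under ./pretrained_weights/stable-diffusion-v1-5 (repo ID is an assumption)
    huggingface-cli download runwayml/stable-diffusion-v1-5 --include "unet/*" "feature_extractor/*" "model_index.json" "v1-inference.yaml" --local-dir ./pretrained_weights/stable-diffusion-v1-5

    # CLIP image encoder for ComfyUI's clip_vision folder (repo ID is an assumption)
    huggingface-cli download lambdalabs/sd-image-variations-diffusers --include "image_encoder/*" --local-dir Your_ComfyUI_root_directory\ComfyUI\models\clip_vision\sd-image-variations-diffusers

    # VAE for ComfyUI's vae folder (repo ID is an assumption)
    huggingface-cli download stabilityai/sd-vae-ft-mse --local-dir Your_ComfyUI_root_directory\ComfyUI\models\vae\sd-vae-ft-mse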