ComfyUI_ModelScopeT2V

Allows native usage of ModelScope based Text To Video Models in ComfyUI

Getting Started

Clone The Repository

```bash
cd /your/path/to/ComfyUI/custom_nodes
git clone https://github.com/ExponentialML/ComfyUI_ModelScopeT2V.git
```

Preparation

Create a folder in your ComfyUI models folder named text2video.
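For example, assuming a default ComfyUI layout where models live under models/, from your ComfyUI root:

```bash
# Create the folder that will hold the ModelScope/Zeroscope weights
mkdir -p models/text2video
```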

Download Models

Models that were converted to A1111 format will work.

Modelscope

https://huggingface.co/kabachuha/modelscope-damo-text2video-pruned-weights/tree/main

Zeroscope

https://huggingface.co/cerspense/zeroscope_v2_1111models
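As a convenience, the weights can be fetched with the huggingface_hub CLI. This is only a sketch: the exact file names depend on the repository, so check its file listing first (the names below are the A1111-format names referenced in the Instructions section).

```bash
# Requires: pip install -U huggingface_hub
# Example for the pruned ModelScope weights; adjust the repo and file names as needed.
huggingface-cli download kabachuha/modelscope-damo-text2video-pruned-weights \
    text2video_pytorch_model.pth open_clip_pytorch_model.bin \
    --local-dir ./downloads
```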

Instructions

Place the text2video_pytorch_model.pth model in the text2video directory.

You must also use the accompanying open_clip_pytorch_model.bin, and place it in the clip folder under your models directory.

This is optional if you're not using the attention layers and are instead using something like AnimateDiff (more on this in Usage).
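With both files in place, the relevant part of the ComfyUI models directory should look like this:

```
ComfyUI/
└── models/
    ├── clip/
    │   └── open_clip_pytorch_model.bin
    └── text2video/
        └── text2video_pytorch_model.pth
```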

Usage

Tips

  1. Use the recently released ResAdapter LoRA for better quality at lower resolutions.
  2. If you're using pure ModelScope, try higher CFG (around 15) for better coherence. You may also try any other rescale nodes.
  3. When using pure ModelScope, ensure that you use a minimum of 24 frames.
  4. If using with AnimateDiff, make sure to use 16 frames if you're not using context options.
  5. If you have enable_attn disabled, you must use the same CLIP model as your SD 1.5 checkpoint.

TODO

Attributions

The temporal code was adapted from https://github.com/kabachuha/sd-webui-text2video. Thanks @kabachuha!

Thanks to the ModelScope team for open sourcing. Check out their existing work at https://github.com/modelscope.