Maintenance note: as of 2023-11-21 this extension was unmaintained; it was then maintained by Deforum-art, and is now maintained by me (@kabachuha) again. If you'd like to help develop or remake it, contact me on Discord @kabachuha (you can also find me in the text2video channel on camenduru's server) and we'll figure it out.
Auto1111 extension implementing various text2video models, such as ModelScope and VideoCrafter, using only Auto1111 webui dependencies and downloadable models (so no logins required anywhere)
6 GB of VRAM should be enough to run on GPU with the low-VRAM VAE enabled at 256x256 (and we are already getting reports of people rendering 192x192 videos with 4 GB of VRAM). A 24-frame 256x256 video definitely fits into the 12 GB of an NVIDIA GeForce RTX 2080 Ti, and if your video card supports the Torch2 attention optimization, you can fit a whopping 125-frame (8-second) video into the same 12 GB of VRAM! 250 frames (16 seconds) under the same conditions take 20 GB.
Prompt: best quality, anime girl dancing
We would appreciate any help with this extension, especially pull requests.
Currently, there is support for LoRAs trained with this finetuning repository; please follow the instructions there on how to train them: https://github.com/ExponentialML/Text-To-Video-Finetuning#updates
After training, simply place them into the default LoRA directory defined by your webui installation (a placement sketch follows below).
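For instance, a minimal sketch of the placement step. The paths are examples: `stable-diffusion-webui/models/Lora` is the stock A1111 LoRA directory, but your installation may define a different one.

```python
# Copy a trained text2video LoRA into the webui's LoRA directory.
# All paths below are example assumptions -- adjust to your setup.
import shutil
from pathlib import Path

webui_root = Path("stable-diffusion-webui")          # adjust to your install
lora_file = Path("outputs/my_t2v_lora.safetensors")  # produced by the finetune repo
lora_dir = webui_root / "models" / "Lora"            # stock A1111 default

lora_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(lora_file, lora_dir / lora_file.name)
print(f"Copied {lora_file.name} to {lora_dir}")
```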
VideoCrafter runs with around 9.2 GB of VRAM at the default settings.
Update 2023-03-26: prompt weights implemented! (ModelScope only, as of 2023-04-05)
Update 2023-03-27: VAE settings and "Keep model in VRAM" moved to the general webui settings under the 'ModelScopeTxt2Vid' section.
Update 2023-04-05: added VideoCrafter support; renamed the extension to plainly 'sd-webui-text2video'.
Update 2023-04-13: in-framing/in-painting support: lets you 'animate' an existing picture or even seamlessly loop videos!
Update 2023-04-15: MEGA-UPDATE: Torch2/xformers optimizations make it possible to render a 125-frame video on 12 GB of VRAM. CPU offloading is now skipped if keep_pipe_in_vram is checked.
Update 2023-04-16: WebAPI is available! (A hedged request sketch follows this list.)
Update 2023-07-02: alternate samplers, model hot-switching.
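As an example, a request to the WebAPI might look like the following. This is a minimal sketch: the endpoint path and parameter names are assumptions for illustration only, so consult the extension's API code for the actual schema of your installed version.

```python
# Hypothetical WebAPI call -- endpoint and field names are assumed, not
# confirmed; check the extension's API implementation for the real schema.
import requests

payload = {
    "prompt": "best quality, astronaut dog",
    "n_prompt": "text, watermark",  # negative prompt
    "steps": 30,
    "frames": 24,
    "width": 256,
    "height": 256,
}
resp = requests.post("http://127.0.0.1:7860/t2v/run", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json())
```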
Prompt: cinematic explosion by greg rutkowski
Prompt: really attractive anime girl skating, by makoto shinkai, cinematic lighting
'Continuing' an existing image
Prompt: best quality, astronaut dog
Prompt: explosion
In-painting and looping back the videos
Prompt: nuclear explosion
Prompt: best quality, lots of cheese
Prompt: anime 1girl reimu touhou
Download the following files from the original HuggingFace repository. Alternatively, download the half-precision fp16 pruned weights (they are smaller and use less VRAM on loading). Put them in `stable-diffusion-webui/models/ModelScope/t2v`, creating these two folders if they are missing.
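If you prefer scripting the download, here is a minimal sketch assuming `huggingface_hub` is installed. The repo id and file names below are assumptions matching the original full-precision ModelScope repo; substitute the fp16 pruned repo and files if you chose those instead.

```python
# Download the ModelScope weights straight into the extension's model folder.
# Repo id and file names are assumptions -- verify against the repo you chose.
from huggingface_hub import hf_hub_download

repo_id = "damo-vilab/modelscope-damo-text-to-video-synthesis"  # assumed repo id
for filename in [
    "VQGAN_autoencoder.pth",
    "configuration.json",
    "open_clip_pytorch_model.bin",
    "text2video_pytorch_model.pth",
]:
    hf_hub_download(repo_id=repo_id, filename=filename,
                    local_dir="stable-diffusion-webui/models/ModelScope/t2v")
```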
Download the pretrained T2V models either via this link or as the pruned half-precision weights, and put the `model.ckpt` in `models/VideoCrafter/model.ckpt`.
Thanks to https://github.com/ExponentialML/Text-To-Video-Finetuning you can fine-tune your models!
To use a fine-tuned model here, run this script, which converts the Diffusers-format model that the repo outputs back into the original weights format.
ZeroScope v2
Trained by @cerspense on high-quality YouTube videos. Download the files from the folder named `zs2_XL` at cerspense/zeroscope_v2_XL, then add the missing `VQGAN_autoencoder.pth` and `configuration.json` from any other ModelScope model (a download sketch follows).
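For scripted downloads, the sketch below uses `huggingface_hub`'s `snapshot_download` to fetch only the `zs2_XL` folder; the `local_dir` path is an arbitrary example, and the two missing files still have to be copied in from another ModelScope model, as described above.

```python
# Fetch only the zs2_XL subfolder from the ZeroScope v2 XL repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="cerspense/zeroscope_v2_XL",
    allow_patterns="zs2_XL/*",              # skip everything else in the repo
    local_dir="downloads/zeroscope_v2_XL",  # example path; then move zs2_XL's
)                                           # contents into models/ModelScope/t2v
```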
Potat1
Potat1 is a ModelScope-based model trained by @camenduru on 2197 clips at a resolution of 1024x576, which makes it the first open-source hi-res text2video model.
To download plug-and-play weights for the extension, use this link: https://huggingface.co/kabachuha/potat1-with-text-encoder-original-format
Animov-0.1
Animov-0.1 by strangeman3107. The converted weights for this model reside here.
txt2vid with img2vid
vid2vid
HuggingFace space:
https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis
The PyTorch implementation of the model from ModelScope:
https://github.com/modelscope/modelscope/tree/master/modelscope/models/multi_modal/video_synthesis
Google Colab from the devs:
https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing
GitHub: