This branch is specifically designed for Stable Diffusion WebUI Forge by lllyasviel. See here for how to install Forge and this extension. See Update for the current status.
This extension aims to integrate AnimateDiff, with CLI support, into lllyasviel's Forge adaptation of AUTOMATIC1111 Stable Diffusion WebUI and form the easiest-to-use AI video toolkit. After enabling this extension, you can generate GIFs in exactly the same way as you generate images.
This extension implements AnimateDiff in a different way. It makes heavy use of the Unet Patcher, so you do not need to reload your model weights unless you want to, and monkey-patching of WebUI and ControlNet is almost entirely avoided.
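To make the "Unet Patcher" idea concrete, here is a minimal sketch of the pattern, assuming Forge's ComfyUI-style patcher API (`p.sd_model.forge_objects.unet`, `clone()`, `set_model_unet_function_wrapper()`) and the `process_before_every_sampling` script hook. Treat these names as assumptions for illustration, not as this extension's actual implementation:

```python
# Minimal sketch of the UnetPatcher pattern, NOT this extension's actual code.
# Assumptions: Forge exposes a ComfyUI-style patcher at p.sd_model.forge_objects.unet
# with clone() and set_model_unet_function_wrapper(), and scripts receive a
# process_before_every_sampling() hook.
from modules import scripts


class UnetPatcherSketch(scripts.Script):
    def title(self):
        return "UnetPatcher sketch"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def process_before_every_sampling(self, p, *args, **kwargs):
        # Clone the patcher instead of mutating the loaded model,
        # so the original weights never need to be reloaded.
        unet = p.sd_model.forge_objects.unet.clone()

        def unet_wrapper(apply_model, params):
            # A real motion module would insert temporal attention around this call.
            return apply_model(params["input"], params["timestep"], **params["c"])

        unet.set_model_unet_function_wrapper(unet_wrapper)
        p.sd_model.forge_objects.unet = unet
```

Because patches live on a cloned patcher object, disabling them only means dropping the clone; the base weights stay untouched.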
You might also be interested in another extension I created: Segment Anything for Stable Diffusion WebUI. That extension will also be redesigned for Forge later.
TusiArt (for users in mainland P.R. China) and TensorArt (for others) offer an online service of this extension.
Update | Future Plan | Model Zoo | Documentation | Tutorial | Thanks | Star History | Sponsor
02/05/2024: txt2img, prompt travel, infinite generation, and all kinds of optimizations have been proven to work properly and elegantly.

02/11/2024: ControlNet V2V in the txt2img panel works properly and elegantly. You can also try adding a mask and inpainting.

03/18/2024: Motion LoRA, i2i batch, and the GroupNorm hack have been restored. Motion LoRA is built on KohakuBlueleaf's LyCORIS extension. The GroupNorm hack currently lives in this branch.

We believe that all features of the OG A1111 version (except IP-Adapter prompt travel / SparseCtrl / ControlNet keyframe / FreeInit) are now available in the Forge version. We will synchronize ControlNet updates from the OG A1111 version, add SparseCtrl and Magic Animate, and add more parameters as soon as we can.
BREAKING CHANGE: if you install the LyCORIS extension in Forge, you must load Motion LoRA as `<lyco:whatever:x.y>` instead of `<lora:whatever:x.y>` (this applies to all LoRA usage under that extension, not only when you use AnimateDiff).
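For example, with a hypothetical Motion LoRA file named `mm_sd15_v2_lora_PanLeft`, the prompt entry becomes `<lyco:mm_sd15_v2_lora_PanLeft:0.8>` rather than `<lora:mm_sd15_v2_lora_PanLeft:0.8>`.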
Although OpenAI Sora is far better at following complex text prompts and generating complex scenes, we believe that OpenAI will NOT open source Sora or any other products they have released recently. My current plan is to continue developing this extension until an open-source video model is released with a strong ability to generate complex scenes, easy customization, and a good ecosystem like SD1.5.
We will try our best to bring interesting research into both WebUI and Forge as long as we can. Not all research will be implemented. You are welcome to submit a feature request if you find something interesting. We are also open to learning from other equivalent software.
That said, due to the notorious difficulty of maintaining sd-webui-controlnet, we do NOT plan to implement ANY new research into WebUI if it touches "reference control", such as Magic Animate. Such features will be Forge only. Some advanced features in ControlNet Forge Integrated, such as per-frame ControlNet masks, will also be Forge only. I really hope to have the bandwidth to rework sd-webui-controlnet, but that would require a huge amount of time.
I maintain a huggingface repo that provides all official models in fp16 & safetensors format. You are highly recommended to use my link. You MUST use my link to download Motion LoRA, Hotshot-XL, the AnimateDiff V3 Motion Adapter, and SparseCtrl. For all other models, you may still use the old links if you want.
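If you prefer to script the download instead of using a browser, here is a minimal sketch with `huggingface_hub`; the repo id, filename, and model folder below are placeholders, not the actual values from my link:

```python
# Hedged sketch: scripted model download with huggingface_hub.
# repo_id, filename, and local_dir are placeholders -- substitute the repo
# and file names linked above and your actual extension model folder.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="<owner>/<animatediff-models>",             # placeholder repo id
    filename="mm_sd15_v3.safetensors",                   # placeholder motion module
    local_dir="extensions/sd-webui-animatediff/model",   # assumed model folder
)
print("saved to", path)
```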
There are a lot of wonderful video tutorials on YouTube and bilibili, and you should check those out for now. A series of updates are on the way, and I do not want to work on my own tutorial before I am satisfied with the available features. An official tutorial will come once I am.
We thank all developers and community users who contribute to this repository in many ways.
You can sponsor me via WeChat, AliPay or PayPal. You can also support me via ko-fi or afdian.