huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

Img2Img for Video Generation #1594

Open MinSong2 opened 1 year ago

MinSong2 commented 1 year ago

Model/Pipeline/Scheduler description

The proposed new community pipeline stems from interpolate_stable_diffusion.py. Unlike interpolate_stable_diffusion, which interpolates between prompts, the proposed pipeline interpolates between an initial image supplied by the user and an image generated from the user's prompt.
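The interpolation described above is typically done in latent space with spherical linear interpolation (slerp), so that intermediate latents keep a plausible norm. Below is a minimal, self-contained sketch of slerp over flattened latent vectors; it is an illustration of the general technique, not the actual code from the linked repository, and the function name and threshold value are assumptions.

```python
import numpy as np

def slerp(t, v0, v1, dot_threshold=0.9995):
    """Spherical linear interpolation between two flat latent vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values trace the
    great-circle arc between the (normalized) directions of v0 and v1.
    """
    v0_unit = v0 / np.linalg.norm(v0)
    v1_unit = v1 / np.linalg.norm(v1)
    dot = float(np.dot(v0_unit, v1_unit))
    if abs(dot) > dot_threshold:
        # Nearly parallel vectors: plain lerp avoids division by sin(theta) ~ 0.
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)            # angle between the two latents
    sin_theta = np.sin(theta)
    s0 = np.sin((1 - t) * theta) / sin_theta
    s1 = np.sin(t * theta) / sin_theta
    return s0 * v0 + s1 * v1

# Illustrative frame schedule: blend from the encoded initial image's
# latent toward the latent of the prompt-generated image.
rng = np.random.default_rng(0)
init_latent = rng.standard_normal(16)       # stand-in for an encoded image
target_latent = rng.standard_normal(16)     # stand-in for a generated image
frames = [slerp(t, init_latent, target_latent)
          for t in np.linspace(0.0, 1.0, num=8)]
```

Each interpolated latent would then be decoded (or denoised further) to produce one video frame, giving a smooth transition from the input image to the generated one.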

I have provided a code snippet showing how to use the custom pipeline at the following GitHub repository: https://github.com/OnomaAI/noci_diffusers/tree/main/img2video.

Open source status

Provide useful links for the implementation

https://github.com/OnomaAI/noci_diffusers/blob/main/img2video/img2img_video_stable_diffusion.py

patrickvonplaten commented 1 year ago

Hey @MinSong2,

Thanks for opening the issue. Note that we need model weights to be available in order to add an architecture.

MinSong2 commented 1 year ago

Hello @patrickvonplaten,

Thank you for your reply! I used stabilityai/stable-diffusion-2 as the model weights, which means that I did not train or update any weights. I may have misunderstood the "Open Source Status" option "The model weights are available (Only relevant if addition is not a scheduler)," so I checked that checkbox.

Many thanks! Min

kadirnar commented 1 year ago

Are you going to add it to the diffusers library, @MinSong2?

I'm also working on producing videos with the img2img pipeline: https://github.com/kadirnar/Custom-Diffusion