nateraw / stable-diffusion-videos

Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
Apache License 2.0

Add stable-diffusion-v1-5 #109

Closed by fagemx 1 year ago

fagemx commented 1 year ago

Using the Stable-Diffusion-v1-5 checkpoint would be great!!!

nateraw commented 1 year ago

It already works :) Check the examples folder.

nateraw commented 1 year ago

To be clear, this is what I'm using right now:


from stable_diffusion_videos import StableDiffusionWalkPipeline

from diffusers.models import AutoencoderKL
from diffusers.schedulers import LMSDiscreteScheduler
import torch

# Load the v1-5 checkpoint in fp16, swapping in the fine-tuned EMA VAE,
# disabling the safety checker, and using an LMS scheduler with the
# standard Stable Diffusion beta schedule.
pipe = StableDiffusionWalkPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5',
    vae=AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema"),
    torch_dtype=torch.float16,
    revision="fp16",
    safety_checker=None,
    scheduler=LMSDiscreteScheduler(
        beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
    ),
).to("cuda")

Closing this since it's not an issue.