Closed cibernicola closed 1 year ago
Do you mean a local Stable Diffusion checkpoint? If that's the case, I think it goes along with #28 and/or #25. I think we should try to prioritize this issue soon.
This is now resolved. You can load whatever checkpoint you like by replacing "CompVis/stable-diffusion-v1-4" in the README snippets with your model ID. For example, to load Waifu Diffusion:
```python
from stable_diffusion_videos import StableDiffusionWalkPipeline
from diffusers.schedulers import LMSDiscreteScheduler
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "hakurei/waifu-diffusion",  # any Hub model ID works here
    use_auth_token=True,
    torch_dtype=torch.float16,
    revision="fp16",  # load the half-precision weights branch
    scheduler=LMSDiscreteScheduler(
        beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
    ),
).to("cuda")
```
It would be wonderful to have the option to load the checkpoint file locally.
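For what it's worth, `from_pretrained` in diffusers also accepts a path to a local directory saved in the diffusers format, so a locally stored checkpoint should already load the same way, just by passing the directory path instead of a Hub ID. A minimal sketch, where the directory name and the small helper are hypothetical:

```python
import os

def resolve_model_source(name_or_path):
    """Return 'local' if the argument is an existing directory on disk,
    otherwise treat it as a Hub model ID."""
    return "local" if os.path.isdir(name_or_path) else "hub"

# Hypothetical local load -- "./waifu-diffusion" would be a directory
# containing the model in diffusers format (model_index.json, unet/, vae/, ...):
#
# pipeline = StableDiffusionWalkPipeline.from_pretrained(
#     "./waifu-diffusion",
#     torch_dtype=torch.float16,
# ).to("cuda")
```

Note this covers diffusers-format directories; a single original `.ckpt` file would still need conversion first.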