Closed omidreza-amrollahi closed 1 year ago
I was able to load it like this (using the latest version of this library, 0.7.1):
```python
from stable_diffusion_videos import StableDiffusionWalkPipeline
from diffusers import EulerDiscreteScheduler
import torch

model_id = "stabilityai/stable-diffusion-2-base"

# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionWalkPipeline.from_pretrained(
    model_id,
    scheduler=scheduler,
    feature_extractor=None,
    safety_checker=None,
    revision="fp16",
    torch_dtype=torch.float16,
).to("cuda")
```
Works, thank you very much! Is there also a way to create videos between two custom images (rather than images generated by Stable Diffusion)?
Nope, it only works for images generated from the model. You can read a detailed explanation of the logic here in the music videos post to understand why that is.
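The reason, in short, is that the video frames come from interpolating between the latent noise tensors (and prompt embeddings) that the model itself sampled; an arbitrary user image has no such latent to interpolate from. A minimal sketch of the spherical interpolation (slerp) commonly used for this, written with plain NumPy (the library's actual implementation operates on torch tensors):

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical linear interpolation between two latent vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values move along
    the great-circle arc between them rather than the straight line.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)  # angle between the two vectors
    if np.isclose(theta, 0.0):
        # Vectors are nearly parallel: fall back to plain lerp
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

Each intermediate latent produced by slerp is then decoded into a frame; that is why both endpoints must be latents the model drew itself.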
Hi, I get the error below when I try to use Stable Diffusion 2:
```
/usr/local/lib/python3.8/dist-packages/diffusers/pipeline_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    672 elif len(missing_modules) > 0:
    673     passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs
--> 674     raise ValueError(
    675         f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed."
    676     )

ValueError: Pipeline <class 'stable_diffusion_videos.stable_diffusion_pipeline.StableDiffusionWalkPipeline'> expected {'safety_checker', 'vae', 'tokenizer', 'unet', 'text_encoder', 'feature_extractor', 'scheduler'}, but only {'vae', 'tokenizer', 'unet', 'text_encoder', 'scheduler'} were passed.
```
Any ideas how it can be solved? Thanks!
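For reference, the check that raises this error is essentially a set difference between the components the pipeline class expects and the ones it received. This simplified sketch (module names copied from the error message; not the exact diffusers source) shows why explicitly passing `feature_extractor=None` and `safety_checker=None`, as in the working snippet at the top of this thread, makes the error go away:

```python
# Components the pipeline class declares (taken from the error message):
expected_modules = {'safety_checker', 'vae', 'tokenizer', 'unet',
                    'text_encoder', 'feature_extractor', 'scheduler'}
# Components actually resolved from the checkpoint / passed by the caller:
passed_modules = {'vae', 'tokenizer', 'unet', 'text_encoder', 'scheduler'}

# Anything expected but not passed triggers the ValueError
missing_modules = expected_modules - passed_modules
print(sorted(missing_modules))  # ['feature_extractor', 'safety_checker']
```

Passing those two arguments as `None` puts them in the "passed" set, so the difference is empty and `from_pretrained` proceeds.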