nateraw / stable-diffusion-videos

Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
Apache License 2.0

Add FAQs to README + Common example snippets #146

Open · nateraw opened this issue 1 year ago

nateraw commented 1 year ago

Keep getting the same questions about a few things; should just put the answers in the README:

Feel free to add/suggest more here if you're creeping on this issue and it's still open. I'll make the PR ASAP.

jtoy commented 1 year ago

Would love to see commonly used parameter values as well. I ran a couple of experiments, and so far the videos don't come out as well as the examples.
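
For reference, here is a sketch of the knobs that seem to matter most, reusing the `pipe` object from the snippets further down in this thread. The numbers are illustrative placeholders, not confirmed-good settings:

```python
# Illustrative values only, not settings confirmed by the maintainer.
# More interpolation steps -> smoother morphing between prompts;
# more inference steps -> cleaner individual frames (but slower generation).
pipe.walk(
    prompts=['a cat', 'a dog'],
    seeds=[1234, 4321],
    num_interpolation_steps=60,  # the examples below use 5, which is more of a quick test
    num_inference_steps=50,
    guidance_scale=7.5,          # how strongly frames follow the prompt
    fps=30,
)
```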

quintendewilde commented 1 year ago

Hi, is it possible to show some examples of how to match the seeds?

I've managed to run the image-generation part, but I got errors when trying to create the video. So I have several folders on Google Drive with images, but no idea where to pass those seeds to restart the video generation from those images.

This is the error I get when trying to generate the movie. Maybe there is something else I'm doing wrong.

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/gradio/routes.py", line 374, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1017, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 835, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.8/dist-packages/stable_diffusion_videos/app.py", line 91, in fn_videos
    return self.pipeline.walk(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/stable_diffusion_videos/stable_diffusion_pipeline.py", line 878, in walk
    return make_video_pyav(
  File "/usr/local/lib/python3.8/dist-packages/stable_diffusion_videos/stable_diffusion_pipeline.py", line 123, in make_video_pyav
    frames = frames.permute(0, 2, 3, 1)
AttributeError: 'NoneType' object has no attribute 'permute'
```
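
On the seed question above: judging by the examples in this thread, `walk()` takes exactly one seed per prompt, so re-running with the same (prompt, seed) pairs should reproduce the same keyframe images. A minimal sketch with placeholder values:

```python
# One seed per prompt: 'a cat' is tied to 1234, 'a dog' to 4321.
# Re-running with the same pairs should regenerate the same keyframes.
# (Placeholder values, not the seeds from the failed run above.)
pipe.walk(
    prompts=['a cat', 'a dog'],
    seeds=[1234, 4321],
    num_interpolation_steps=5,
)
```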
nateraw commented 1 year ago

Examples. Note to self: add these to the README, each in an accordion.
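
For the accordion, GitHub's `<details>`/`<summary>` blocks should do the trick in the README, e.g.:

```html
<details>
<summary>SD 1.4</summary>

<!-- code snippet goes here -->

</details>
```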

SD 1.4

```python
from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

# Use the best available device; fp16 only makes sense on CUDA.
device = "mps" if torch.backends.mps.is_available() else "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch_dtype,
    safety_checker=None,
).to(device)

# Morph from 'a cat' to 'a dog'; each prompt is paired with one seed.
pipe.walk(
    prompts=['a cat', 'a dog'],
    seeds=[1234, 4321],
    num_interpolation_steps=5,
    num_inference_steps=30,
    fps=5,
)
```

SD 1.5

```python
# Identical to the SD 1.4 example except for the checkpoint name.
from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

device = "mps" if torch.backends.mps.is_available() else "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionWalkPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch_dtype,
    safety_checker=None,
).to(device)

pipe.walk(
    prompts=['a cat', 'a dog'],
    seeds=[1234, 4321],
    num_interpolation_steps=5,
    num_inference_steps=30,
    fps=5,
)
```

SD 2.1

```python
import torch

from stable_diffusion_videos import StableDiffusionWalkPipeline
from diffusers import DPMSolverMultistepScheduler

device = "mps" if torch.backends.mps.is_available() else "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionWalkPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch_dtype,
).to(device)
# Swap in the DPM-Solver++ multistep scheduler for SD 2.1.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

pipe.walk(
    prompts=['a cat', 'a dog'],
    seeds=[1234, 4321],
    num_interpolation_steps=5,
    num_inference_steps=50,
    fps=5,
)
```
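
A note on the scheduler swap in the 2.1 example: DPMSolverMultistepScheduler typically reaches comparable quality in fewer inference steps than the default scheduler, so the 50 steps above are likely a comfortable upper bound rather than a tuned value.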