nateraw / stable-diffusion-videos

Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
Apache License 2.0

Torch complaining on M1 machine #79

Closed enzyme69 closed 1 year ago

enzyme69 commented 1 year ago

I kept getting this error when running it on an M1 machine with 8 GB of RAM: Torch not compiled with CUDA enabled
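
A quick way to check which backends the installed PyTorch build actually supports (a small sketch, not from the original report; it assumes PyTorch 1.12+, which is when the MPS backend was added):

import torch

# CUDA is never available on Apple Silicon, so anything that calls .cuda() or
# defaults to the "cuda" device raises "Torch not compiled with CUDA enabled".
print(torch.cuda.is_available())          # expected: False on an M1
print(torch.backends.mps.is_built())      # True if this build includes Metal (MPS) support
print(torch.backends.mps.is_available())  # True if macOS/hardware can actually use it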

enzyme69 commented 1 year ago

I use my own venv, but I cannot install this:

conda install pytorch cudatoolkit=11.3 -c pytorch -y
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:

  - cudatoolkit=11.3

Current channels:

  - https://conda.anaconda.org/pytorch/osx-arm64
  - https://conda.anaconda.org/pytorch/noarch
  - https://conda.anaconda.org/conda-forge/osx-arm64
  - https://conda.anaconda.org/conda-forge/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.
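
(Side note: cudatoolkit is a CUDA-only package, so it isn't published for the osx-arm64 channels listed above, which is why the solve fails. On Apple Silicon the MPS-capable PyTorch build installs without it; a minimal sketch, assuming the default pytorch channel:

conda install pytorch torchvision torchaudio -c pytorch

or simply pip install torch inside the venv.)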
nateraw commented 1 year ago

Can you try installing the latest version (0.6.0) and running this?

from stable_diffusion_videos import StableDiffusionWalkPipeline

pipeline = StableDiffusionWalkPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipeline = pipeline.to('mps')

video_path = pipeline.walk(
    ['a cat', 'a dog'],         # prompts to interpolate between
    [42, 1337],                 # one seed per prompt
    fps=5,                      # use 5 for testing, 25 or 30 for better quality
    num_interpolation_steps=5,  # use 3-5 for testing, 30 or more for better results
    height=512,                 # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,                  # use multiples of 64 if > 512. Multiples of 8 if < 512.
)
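
If the 8 GB of RAM turns out to be the bottleneck, it may also be worth enabling attention slicing before calling walk; this assumes StableDiffusionWalkPipeline inherits diffusers' enable_attention_slicing method, which hasn't been verified here:

pipeline.enable_attention_slicing()  # lower peak memory at a small speed cost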
nateraw commented 1 year ago

Actually, let's keep this discussion in one place. Let's discuss in #38