justinpinkney / stable-diffusion


Colab demo #6

Closed · thedarkzeno closed 1 year ago

thedarkzeno commented 2 years ago

Tried to run it on Colab, but it doesn't get beyond load_model_from_config. Can someone share a working Colab demo of the image variations? Thanks.

About my attempt to run it on Colab: this is the output I get when I run image_variations.py:

Loading model from models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Keeping EMAs of 688.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
^C
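
For context, a minimal sketch of what that loading step typically looks like in this repo's CompVis-style scripts, assuming the usual PyTorch Lightning checkpoint layout (the config path below is a placeholder, and instantiate_from_config is the repo's helper). The whole state dict is materialised in system RAM before anything reaches the GPU, which is roughly where the log above stops:

```python
# Sketch of a CompVis-style checkpoint load; config path is a placeholder.
import torch
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config  # helper from this repo

ckpt_path = "models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt"
config_path = "configs/stable-diffusion/your-image-variations-config.yaml"  # placeholder

config = OmegaConf.load(config_path)

# torch.load pulls the entire multi-gigabyte state dict into system RAM.
pl_sd = torch.load(ckpt_path, map_location="cpu")
state_dict = pl_sd.get("state_dict", pl_sd)

# Instantiating the model allocates a second full copy of the weights on the CPU.
model = instantiate_from_config(config.model)
missing, unexpected = model.load_state_dict(state_dict, strict=False)

# Only now do the weights move to the GPU; half precision keeps VRAM use manageable.
model = model.half().cuda()
del pl_sd, state_dict  # free the CPU-side copy once the GPU has the weights
```
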
justinpinkney commented 2 years ago

I think that's hitting the system RAM limit, which is why it gets killed. You might need Colab Pro to be able to run the model.
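
A quick way to check that on a Colab runtime is to look at available system RAM before the checkpoint load (psutil is normally preinstalled on Colab). If the available figure is well below the checkpoint size plus a second copy of the model weights, the load will be killed partway through, as in the log above:

```python
# Report total and available system RAM on the runtime.
import psutil

vm = psutil.virtual_memory()
print(f"total RAM: {vm.total / 1e9:.1f} GB, available: {vm.available / 1e9:.1f} GB")
```
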

devonbrackbill commented 2 years ago

I found that Colab and Colab Pro consistently ran into memory issues with this training approach (I didn't try Colab Pro+). Switching to another cloud provider like https://lambdalabs.com/service/gpu-cloud or https://www.runpod.io/ worked instead.

dvschultz commented 1 year ago

I’m able to get it to train on Colab Pro+. You definitely need an A100 for the added VRAM.
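
If you want to confirm which GPU a Colab Pro+ runtime actually attached, and how much VRAM it has, a quick probe with standard torch.cuda calls is enough:

```python
# Report the attached GPU and its total VRAM.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1e9:.1f} GB VRAM")
else:
    print("No GPU attached to this runtime.")
```
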