nateraw / stable-diffusion-videos

Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
Apache License 2.0
4.42k stars · 421 forks

CUDA out of memory with pro colab #167

Closed quintendewilde closed 1 year ago

quintendewilde commented 1 year ago

```json
{
  "prompt": "8 mm footage of a horse",
  "guidance_scale": 7.5,
  "eta": 0.0,
  "num_inference_steps": 50,
  "upsample": false,
  "height": 832,
  "width": 640,
  "scheduler": {
    "num_train_timesteps": 1000,
    "beta_start": 0.00085,
    "beta_end": 0.012,
    "beta_schedule": "scaled_linear",
    "trained_betas": null,
    "skip_prk_steps": true,
    "set_alpha_to_one": false,
    "prediction_type": "epsilon",
    "steps_offset": 1,
    "_class_name": "PNDMScheduler",
    "_diffusers_version": "0.6.0",
    "clip_sample": false
  },
  "tiled": false,
  "diffusers_version": "0.11.1",
  "device_name": "NVIDIA A100-SXM4-40GB"
}
```

```
CUDA out of memory. Tried to allocate 18.57 GiB (GPU 0; 39.59 GiB total capacity; 21.88 GiB already allocated; 15.70 GiB free; 21.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

I'm working on the Colab Pro plan with high RAM and the GPU runtime set to the highest tier.

Any idea what's going wrong?
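For context on where the 18+ GiB allocation comes from: self-attention memory grows with the square of the latent token count, so 832×640 is far more expensive than 512×512. The sketch below applies the `max_split_size_mb` hint from the error and uses a hypothetical helper (not code from this repo) to estimate a single attention map's size:

```python
import os

# Apply the allocator hint from the error message *before* torch initializes
# CUDA; it reduces fragmentation, not total usage.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

def attention_map_bytes(height, width, heads=8, bytes_per_el=2):
    """Rough fp16 size of one self-attention score matrix (hypothetical
    estimate): latents are 1/8 the pixel resolution per side, and the
    attention map is tokens x tokens per head."""
    tokens = (height // 8) * (width // 8)
    return heads * tokens * tokens * bytes_per_el

# At 832x640 a single attention layer's score matrix is already ~1.1 GB,
# before batching or the rest of the UNet.
print(f"{attention_map_bytes(832, 640) / 1e9:.1f} GB per attention layer")
```

This is why the usual mitigations are lowering the resolution, loading the pipeline in fp16, or enabling attention slicing on the pipeline if your diffusers version supports it.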

nateraw commented 1 year ago

Can you please provide the Colab notebook or a snippet of what you ran? It would be helpful to know your batch size, etc.

AJV009 commented 1 year ago

I set a batch size of 4 and used the A100 runtime in Colab Pro and faced the same problem. Also, I am using your SD 2.1 snippet from another issue.
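A batch size of 4 at this resolution multiplies the peak activation memory by four. One hedged workaround, independent of any library API, is to split the prompts and run smaller forward passes; the `chunked` helper below is hypothetical, not part of this repo:

```python
def chunked(items, batch_size):
    """Yield successive batches so each forward pass fits in VRAM."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

prompts = ["8 mm footage of a horse"] * 4
# Run one prompt per forward pass instead of all four at once; the trade-off
# is wall-clock time for a ~4x drop in peak activation memory.
for batch in chunked(prompts, 1):
    print(batch)
```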