somepago / DCR

Official PyTorch repo of the CVPR'23 and NeurIPS'23 papers on understanding replication in diffusion models.
Apache License 2.0

python diff_inference.py gets OutOfMemoryError #10

Open whybfq opened 4 months ago

whybfq commented 4 months ago

I first generated the pictures using `diff_inference.py`:

```
python diff_inference.py -nb 4000 --dataset laion --capstyle instancelevel_blip --rand_augs rand_numb_add
```

and ran into the following error:

```
  File "/home/anaconda3/envs/diffrep/lib/python3.9/site-packages/diffusers/models/cross_attention.py", line 314, in __call__
    attention_probs = attn.get_attention_scores(query, key, attention_mask)
  File "/home/anaconda3/envs/diffrep/lib/python3.9/site-packages/diffusers/models/cross_attention.py", line 253, in get_attention_scores
    attention_probs = attention_scores.softmax(dim=-1)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.16 GiB (GPU 0; 15.46 GiB total capacity; 11.31 GiB already allocated; 2.48 GiB free; 11.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Yet the GPU still seems to have plenty of free memory (screenshot attached). Thanks in advance for any suggestions.
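For what it's worth, one way to apply the allocator hint from the error message would be to set `PYTORCH_CUDA_ALLOC_CONF` before CUDA is initialized. A minimal sketch (the 128 MiB split size is only an example value, not something taken from this repo):

```python
import os

# As suggested by the OOM message: max_split_size_mb stops the caching
# allocator from splitting blocks larger than this size, which can reduce
# fragmentation. It has to take effect before the first CUDA allocation,
# so it is set here before importing torch. 128 is only an example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # noqa: E402  (imported after setting the env var on purpose)

print(torch.cuda.is_available())  # quick sanity check that CUDA is visible
```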


somepago commented 4 months ago

Is it possible to do single-image inference of the stabilityai/stable-diffusion-2-1 model on your GPU? Most of my experiments were conducted on A5000/A6000 GPUs.

The case you are testing is just plain SD 2.1 inference with modified prompts, so you could try changing some of the hyperparameters here to see if the code runs; a rough sketch is below.
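For reference, a bare-bones single-image SD 2.1 test in half precision looks roughly like this (a generic diffusers sketch, not the exact code path in diff_inference.py):

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal single-image SD 2.1 sanity check; fp16 weights roughly halve the
# memory footprint compared to fp32 and should fit comfortably on a 16 GiB GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "a photograph of an astronaut riding a horse",  # any test prompt works
    num_inference_steps=25,
).images[0]
image.save("sd21_test.png")
```

If this single-image test runs on your card, the remaining difference is most likely the per-batch settings in the script rather than the base model.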

whybfq commented 4 months ago

Thanks for your suggestion. Even with im_batch=1, num_inference_steps=5, and nbatches set to 4, I still get this error (screenshot attached).
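Would something like attention slicing on the pipeline help here, given that the OOM happens inside the cross-attention softmax? A sketch of what I mean (assuming the diffusers pipeline built in diff_inference.py is a `StableDiffusionPipeline`; the function and variable names here are mine, not from the repo):

```python
from diffusers import StableDiffusionPipeline


def reduce_attention_memory(pipe: StableDiffusionPipeline) -> StableDiffusionPipeline:
    # Attention slicing computes attention in smaller chunks, trading some
    # speed for lower peak memory in the softmax call that hits the OOM.
    pipe.enable_attention_slicing()
    # With xformers installed, memory-efficient attention is another option:
    # pipe.enable_xformers_memory_efficient_attention()
    return pipe
```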