File "/d1/daeun/custom-diffusion/stable-diffusion/ldm/modules/attention.py", line 258, in forward
x = block(x, context=context)
File "/d1/daeun/anaconda3/env/diff/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/d1/daeun/custom-diffusion/stable-diffusion/ldm/modules/attention.py", line 209, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "/d1/daeun/custom-diffusion/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 116, in checkpoint
return func(*inputs)
File "/d1/daeun/custom-diffusion/stable-diffusion/ldm/modules/attention.py", line 212, in _forward
x = self.attn1(self.norm1(x)) + x
File "/d1/daeun/anaconda3/env/diff/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/d1/daeun/custom-diffusion/src/model.py", line 165, in new_forward
sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
RuntimeError: CUDA out of memory. Tried to allocate 5.00 GiB (GPU 0; 11.91 GiB total capacity; 9.39 GiB already allocated; 848.94 MiB free; 10.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Hi, I ran into a CUDA out-of-memory error with the traceback above. It happens when I run `sample.py` from `finetune_gen.sh`. Can anyone help me? ;)
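From the traceback, the 5 GiB allocation fails inside the patched cross-attention (`sim = einsum('b i d, b j d -> b i j', q, k)`), i.e. materializing the full query-key similarity matrix at once. One common workaround, besides lowering the batch size or image resolution, is to compute attention in query chunks so only a slice of that matrix exists at any time. Below is a minimal sketch of that idea; the function name and chunk size are my own, not part of the Custom Diffusion code:

```python
import torch
from torch import einsum

def chunked_attention(q, k, v, scale, chunk_size=1024):
    """Attention computed in slices along the query dimension.

    Only a (batch, chunk_size, seq_k) slab of the similarity matrix
    is alive at a time, instead of the full (batch, seq_q, seq_k)
    tensor that triggers the OOM in the traceback.
    """
    out = []
    for start in range(0, q.shape[1], chunk_size):
        q_chunk = q[:, start:start + chunk_size]              # (b, c, d)
        sim = einsum('b i d, b j d -> b i j', q_chunk, k) * scale
        attn = sim.softmax(dim=-1)                            # (b, c, j)
        out.append(einsum('b i j, b j d -> b i d', attn, v))  # (b, c, d)
    return torch.cat(out, dim=1)
```

The result is numerically identical to the unchunked version, since softmax is applied row-wise over the key dimension. If fragmentation (reserved >> allocated, as the error message notes) is the real issue, setting `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512` before launching may also help.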