Hello! I ran into the following error: RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 2.00 GiB total capacity; 757.17 MiB already allocated; 24.89 MiB free; 44.83 MiB cached)
File "generate.py", line 137, in <module>
main(args)
File "generate.py", line 111, in main
x_adv = fgsm(model, x, t, loss_func, eps)
File "D:\Graduate\APE-GAN-master\utils.py", line 14, in fgsm
loss.backward(retain_graph=True)
File "C:\Users\acer\Anaconda3\lib\site-packages\torch\tensor.py", line 102, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\acer\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 2.00 GiB total capacity; 757.17 MiB already allocated; 24.89 MiB free; 44.83 MiB cached)
Do you have any suggestions for solving it? Thanks!
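The traceback shows the OOM happens inside `loss.backward(retain_graph=True)` in `fgsm`. If the graph does not actually need to survive past a single gradient computation, dropping `retain_graph=True` and building the perturbation under `torch.no_grad()` usually lowers peak GPU memory, since PyTorch can free the intermediate buffers right after the backward pass. A minimal sketch of such an FGSM step follows; the real `utils.py` is not shown here, so this function only mirrors the call signature `fgsm(model, x, t, loss_func, eps)` from the traceback and is an assumption, not the repository's actual code:

```python
import torch


def fgsm(model, x, t, loss_func, eps):
    # Hypothetical memory-lean FGSM step (not the original utils.py code).
    # Work on a detached leaf tensor so gradients accumulate only on x.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_func(model(x), t)
    # No retain_graph=True: the graph is freed immediately after backward,
    # which releases the intermediate activations and reduces peak memory.
    loss.backward()
    # Build the adversarial example without tracking a new graph.
    with torch.no_grad():
        x_adv = x + eps * x.grad.sign()
    return x_adv.detach()
```

If that alone is not enough on a 2 GiB GPU, reducing the batch size passed to `generate.py` is the other common lever.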