RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 14.76 GiB total capacity; 13.49 GiB already allocated; 7.75 MiB free; 13.70 GiB reserved in total by PyTorch)
My instance is a g4dn.16xlarge, the batch size is 8, and there are 1600 training samples, yet I am still getting this CUDA out-of-memory error.
It would be great if someone could explain what "13.70 GiB reserved in total by PyTorch" means and how I can free this memory.
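For reference, here is how I read the numbers in the traceback (plain arithmetic on the values above; my understanding is that "reserved" is PyTorch's caching-allocator pool, of which "allocated" is the part backing live tensors):

```python
# Values taken directly from the error message above.
total_gib = 14.76      # total GPU capacity
reserved_gib = 13.70   # reserved in total by PyTorch's caching allocator
allocated_gib = 13.49  # already allocated (held by live tensors)

# Reserved but not backing tensors: the allocator's cache.
cached_gib = reserved_gib - allocated_gib
print(f"cached by allocator: {cached_gib:.2f} GiB")    # prints 0.21

# Memory the driver holds outside PyTorch's pool (context, other processes).
outside_gib = total_gib - reserved_gib
print(f"outside PyTorch pool: {outside_gib:.2f} GiB")  # prints 1.06
```

So only about 0.21 GiB sits in the cache; almost all of the reserved memory is genuinely in use by tensors.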