I am trying to train a model, and in stage 2 I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 1.71 GiB (GPU 0; 8.00 GiB total capacity; 3.55 GiB already allocated; 289.88 MiB free; 6.22 GiB reserved in total by PyTorch)
I have set num_workers to 1.
I am using a GTX 1080 card.
Am I unable to train on this card? Or can I adjust some parameters somewhere?
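In case it helps, here is roughly how my data loading is set up; this is a minimal sketch with a placeholder dataset, assuming a standard PyTorch DataLoader (batch_size=8 is illustrative, not my actual setting):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for my real (much larger) data.
dataset = TensorDataset(
    torch.randn(64, 3, 32, 32),
    torch.zeros(64, dtype=torch.long),
)

# num_workers=1 as mentioned above. As I understand it, num_workers only
# controls CPU data-loading processes; batch_size is the knob that would
# shrink per-step GPU memory, if that is the right fix here.
loader = DataLoader(dataset, batch_size=8, num_workers=1, shuffle=True)

x, y = next(iter(loader))
print(x.shape)  # torch.Size([8, 3, 32, 32])
```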
Thanks!