Hello,
I'm trying to run the script on a GTX 1060 (the 3GB version), but I keep getting the following error on the third step:
RuntimeError: CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 2.94 GiB total capacity; 1.32 GiB already allocated; 349.88 MiB free; 1.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
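For reference, the message itself suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF. As far as I understand (untested on my setup, and the 128 MB value is just a guess), it would need to be set before the first CUDA allocation, something like:

import os

# Set the allocator config before torch touches the GPU, as the error message suggests.
# The 128 MB split size is only an example value, not a recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the env var so the caching allocator picks it up

I'm not sure whether that alone would be enough on a 3 GB card, though.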
This particularly happens when running the model.setup(opt) line. Is there anything that could be done to the script to make it work on my card? Thanks.