OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 11.00 GiB total capacity; 10.22 GiB already allocated; 0 bytes free; 10.26 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF #176
OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 11.00 GiB total capacity; 10.22 GiB already allocated; 0 bytes free; 10.26 GiB reserved in total by PyTorch) If reserved memory is >>
allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I want to use two 1080Ti GPUs to run "python finetune.py --data_path ./sample/merge.json --test_size 2000", but only one GPU is active, and then training fails with the CUDA out-of-memory error above. How do I use both GPUs? I tried editing "finetune.sh" to set TOT_CUDA="0,1", but it had no effect.
Thank you!
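For reference, here is a minimal launcher sketch, assuming finetune.sh ultimately exposes TOT_CUDA as CUDA_VISIBLE_DEVICES and starts one process per GPU. The torchrun flags are standard PyTorch DDP usage, not necessarily this repo's exact script:

```shell
#!/bin/sh
# Hypothetical sketch of a two-GPU launch; TOT_CUDA mirrors the variable in
# finetune.sh, everything else is an assumption based on standard PyTorch DDP.
TOT_CUDA="0,1"                                       # GPU ids to expose
N_GPUS=$(echo "$TOT_CUDA" | awk -F',' '{print NF}')  # count the ids
echo "launching on $N_GPUS GPUs: $TOT_CUDA"
# The actual launch would look roughly like (commented out here):
# CUDA_VISIBLE_DEVICES=$TOT_CUDA torchrun --nproc_per_node="$N_GPUS" \
#     finetune.py --data_path ./sample/merge.json --test_size 2000
```

If both GPUs still sit idle, check whether the script launches with plain `python` instead of `torchrun`/`torch.distributed.launch` — a single-process launch will only ever use one device.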
If you have not modified the default parameters in the code, the 1080Ti has enough memory.
The way you set up the two GPUs is correct.
You can set the micro batch size to a smaller value to avoid the OOM.
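To illustrate the trade-off: lowering the micro batch size reduces the per-GPU memory needed for each forward/backward pass, and you can keep the effective batch size constant by raising the gradient accumulation steps. A minimal sketch of the arithmetic (the names `batch_size` and `micro_batch_size` are assumptions about the script's parameters):

```python
# Sketch: keep the effective batch size fixed while shrinking the per-step
# micro batch that actually has to fit in GPU memory.
def accumulation_steps(batch_size: int, micro_batch_size: int, num_gpus: int = 1) -> int:
    """Gradient-accumulation steps needed so that
    micro_batch_size * num_gpus * steps == batch_size."""
    per_step = micro_batch_size * num_gpus
    if batch_size % per_step != 0:
        raise ValueError("batch_size must be divisible by micro_batch_size * num_gpus")
    return batch_size // per_step

# Halving the micro batch doubles the accumulation steps: memory use drops,
# but the optimizer still sees the same effective batch.
print(accumulation_steps(128, 8, num_gpus=2))   # → 8
print(accumulation_steps(128, 4, num_gpus=2))   # → 16
```

The cost of a smaller micro batch is more optimizer-side iterations per update, so training is somewhat slower, but it is usually the simplest way to fit an 11 GiB card.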