When running train.py, I keep getting the error "CUDA out of memory. Tried to allocate 900.00 MiB. GPU 0 has a total capacty of 14.75 GiB of which 62.81 MiB is free. Process 118187 has 14.68 GiB memory in use. Of the allocated memory 14.52 GiB is allocated by PyTorch, and 36.59 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation". After looking through earlier issues, I tried setting pin_memory to False and changing the batch_size in train_fusion to args.batch_size, but the error still occurs. Is this caused by insufficient GPU memory? I'm using a T4 GPU on Google Colab.
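Since the error itself suggests setting max_split_size_mb, one thing worth trying before anything else is passing that allocator option through the PYTORCH_CUDA_ALLOC_CONF environment variable, combined with a smaller batch size. A minimal sketch (the --batch_size flag and the value 8 are assumptions based on args.batch_size in the post; adjust to whatever train.py actually accepts):

```shell
# Ask PyTorch's caching allocator to cap split block size at 128 MiB,
# which can reduce fragmentation when large allocations fail.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Run training with a reduced batch size (hypothetical flag name,
# inferred from args.batch_size in the question).
python train.py --batch_size 8
```

Note that in this log only ~36 MiB is "reserved but unallocated", so fragmentation is likely not the main cause here: 14.52 GiB of a 14.75 GiB T4 is genuinely allocated by PyTorch, which points to the model plus activations simply not fitting. Reducing batch size further, or using gradient accumulation or mixed precision, addresses that more directly.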