zengkun301 / NLSAN

MIT License
1 stars 0 forks source link

OutOfMemoryError: CUDA out of memory. #1

Open Shiqi72 opened 2 months ago

Shiqi72 commented 2 months ago

Hello, I'd like to ask how batch_size and patch_size were set for ×3 training. When reproducing, I run out of memory. Could you share your parameter settings? Many thanks, and sorry for the trouble. torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 768.00 MiB. GPU 0 has a total capacty of 8.00 GiB of which 0 bytes is free. Of the allocated memory 13.53 GiB is allocated by PyTorch, and 416.80 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
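As the error message itself suggests, one mitigation for allocator fragmentation is setting `max_split_size_mb` through the `PYTORCH_CUDA_ALLOC_CONF` environment variable before launching training. A minimal sketch; the value 128 is an illustrative choice, not one from the thread, and the training entry point is hypothetical:

```shell
# Cap the size of allocator blocks that may be split, to reduce
# fragmentation (128 MiB is an assumed value; tune per workload).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
# then launch training, e.g. (hypothetical entry point):
# python train.py
```

This only helps when the trace shows a large "reserved but unallocated" amount, as here; it does not shrink the model's actual footprint.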

zengkun301 commented 2 months ago

Hello, I'd like to ask how batch_size and patch_size were set for ×3 training. When reproducing, I run out of memory. Could you share your parameter settings? Many thanks, and sorry for the trouble.

Hello. During training, batch_size_per_gpu was set to 8 and the patch_size of the LR images to 64. Training took roughly one week on eight RTX 3090 GPUs.
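Those settings imply an effective batch of 64 samples per optimizer step (8 per GPU × 8 GPUs), far beyond what an 8 GiB card can hold at once. A minimal sketch of matching that effective batch on a single small GPU via gradient accumulation; the micro-batch of 2 is an assumption about what fits in 8 GiB, not a value from the thread:

```python
# Numbers reported in the thread:
gpus_original = 8        # eight RTX 3090 cards
batch_per_gpu = 8        # batch_size_per_gpu
effective_batch = gpus_original * batch_per_gpu  # samples per optimizer step

# Assumed micro-batch that fits on a single 8 GiB GPU (as in the error);
# accumulate gradients over several forward/backward passes before stepping,
# so the optimizer still sees the same effective batch.
micro_batch = 2
accum_steps = effective_batch // micro_batch

print(effective_batch)  # → 64
print(accum_steps)      # → 32
```

Reducing the LR patch_size below 64 (memory grows roughly with the square of the patch side) is the other common lever, though both changes can affect final accuracy relative to the reported setup.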