liujun0621 opened 1 month ago
Hi! The proposed loss indeed requires large memories (we use an A100 GPU with 40GB in training). In practice, you can use a small patch size or change the pixel_level to smaller values (16 or 8) for computational efficiency.
When I train on a 2080 or 4090 GPU, training fails with the following error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.00 GiB (GPU 0; 10.75 GiB total capacity; 5.97 GiB already allocated; 2.42 GiB free; 8.08 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
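The error message itself suggests setting `max_split_size_mb` to reduce allocator fragmentation. A minimal sketch of how to do that from Python, before CUDA is initialized (the value `128` is an assumption you may need to tune, and this only helps with fragmentation, not with a workload that is genuinely too large for the card):

```python
import os

# Assumption: set the allocator config before torch initializes CUDA.
# max_split_size_mb=128 is an illustrative value, not a recommendation
# from the repository authors.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import torch only after the env var is set
```

Equivalently, you can export `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the shell before launching training. If the OOM persists, reducing batch size, patch size, or the `pixel_level` value (as suggested above) is likely the more effective fix on a 10–24 GB GPU.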
Thanks very much.