When testing the second image, I got an error:
33%|████████████████████████ | 1/3 [01:33<03:06, 93.34s/it]
Traceback (most recent call last):
File "mytest.py", line 223, in &lt;module&gt;
main()
File "mytest.py", line 219, in main
validation(model, test_loader, val_transforms, args)
File "mytest.py", line 55, in validation
pred = sliding_window_inference(image, (args.roi_x, args.roi_y, args.roi_z), 1, model, overlap=0.5, mode='gaussian')
File "/python3.8/site-packages/monai/inferers/utils.py", line 215, in sliding_window_inference
output_image_list.append(torch.zeros(output_shape, dtype=compute_dtype, device=device))
RuntimeError: CUDA out of memory. Tried to allocate 2.70 GiB (GPU 0; 11.91 GiB total capacity; 8.87 GiB already allocated; 2.31 GiB free; 8.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I tried wrapping the inference in torch.no_grad() and similar fixes, but the out-of-memory error persists.