ljwztc / CLIP-Driven-Universal-Model

[ICCV 2023] CLIP-Driven Universal Model; Rank first in MSD Competition.

CUDA out of memory halfway through inference #55

Closed · sharonlee12 closed this issue 6 months ago

sharonlee12 commented 7 months ago

While testing the second image, I hit the following error:

```
33%|████████████████████████ | 1/3 [01:33<03:06, 93.34s/it]
Traceback (most recent call last):
  File "mytest.py", line 223, in <module>
    main()
  File "mytest.py", line 219, in main
    validation(model, test_loader, val_transforms, args)
  File "mytest.py", line 55, in validation
    pred = sliding_window_inference(image, (args.roi_x, args.roi_y, args.roi_z), 1, model, overlap=0.5, mode='gaussian')
  File "/python3.8/site-packages/monai/inferers/utils.py", line 215, in sliding_window_inference
    output_image_list.append(torch.zeros(output_shape, dtype=compute_dtype, device=device))
RuntimeError: CUDA out of memory. Tried to allocate 2.70 GiB (GPU 0; 11.91 GiB total capacity; 8.87 GiB already allocated; 2.31 GiB free; 8.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

I tried wrapping the inference in `with torch.no_grad():` and similar fixes, but the error persists.
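For context, a rough sketch of what I already tried. The function names and `args` fields follow the traceback above, but the loop body and the `batch["image"]` key are reconstructions of my script, and the `max_split_size_mb` value just follows the hint in the error message:

```python
import os

# Suggested by the error message: cap allocator split size to reduce
# fragmentation. Must be set before the CUDA context is initialized.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
from monai.inferers import sliding_window_inference

def validation(model, test_loader, val_transforms, args):
    model.eval()
    with torch.no_grad():  # avoid keeping autograd buffers during inference
        for batch in test_loader:
            image = batch["image"].cuda()
            pred = sliding_window_inference(
                image,
                (args.roi_x, args.roi_y, args.roi_z),
                1,  # sw_batch_size
                model,
                overlap=0.5,
                mode="gaussian",
            )
            torch.cuda.empty_cache()  # release cached blocks between cases
```

Even with this, the second image still runs out of memory at the same `torch.zeros` allocation inside `sliding_window_inference`.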

ljwztc commented 6 months ago

You can set the `device` argument of this API to `torch.device('cpu')`, so the stitched output volume is stored in CPU memory instead of on the GPU. For more details, please refer to https://docs.monai.io/en/latest/inferers.html
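For example, using the call site from your traceback (a sketch; `sw_device` is optional here, since it defaults to the device of the input tensor):

```python
import torch
from monai.inferers import sliding_window_inference

# Each window is still predicted on the GPU; only the stitched
# full-volume output buffer lives on the CPU, which avoids the large
# torch.zeros GPU allocation that triggered the OOM.
pred = sliding_window_inference(
    image,                                 # input tensor on the GPU
    (args.roi_x, args.roi_y, args.roi_z),  # ROI size
    1,                                     # sw_batch_size
    model,
    overlap=0.5,
    mode="gaussian",
    sw_device=torch.device("cuda"),        # run each window on the GPU
    device=torch.device("cpu"),            # stitch the output on the CPU
)
```

The trade-off is extra host-device transfers per window, so inference gets slower but fits in a fixed GPU memory budget regardless of the volume size.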