Closed eric8576 closed 1 year ago
Hi, the current training patch size was maximised for a 32GB GPU. You can try reducing the training patch size, called pad_crop_shape in the code https://github.com/KCL-BMEIS/VS_Seg/blob/05065cbd97ebf5366695db5a1c3e8745f2295fb0/params/VSparams.py#L76 and, accordingly, the sliding_window_inferer_roi_size https://github.com/KCL-BMEIS/VS_Seg/blob/05065cbd97ebf5366695db5a1c3e8745f2295fb0/params/VSparams.py#L96 Values should be multiples of 2; for example, you can try [256, 256, 64].
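A minimal sketch of the change (not the repo's actual code; the variable names come from the linked VSparams.py lines, and [256, 256, 64] is the value suggested above):

```python
# Reduce both patch-size settings in params/VSparams.py to lower GPU
# memory use. Both should be set consistently, and every dimension
# should be a multiple of 2.
pad_crop_shape = [256, 256, 64]                    # training patch size
sliding_window_inferer_roi_size = [256, 256, 64]   # inference ROI size

# Sanity check: all dimensions are multiples of 2.
for shape in (pad_crop_shape, sliding_window_inferer_roi_size):
    assert all(s % 2 == 0 for s in shape), "dimensions must be multiples of 2"
```

If this still runs out of memory, the in-plane dimensions can be halved again (e.g. [128, 128, 64]) at the cost of less spatial context per patch.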
It works! Thank you for your reply!
Hi, my computer's GPU has 24GB, but when the code reaches epoch 2, loss.backward() raises CUDA out of memory. The batch size is one. I do not know how to fix this! Thanks :)