Open Itsanewday opened 11 months ago
Reduce the patch_size
Thank you very much for your attention. Due to recent changes in my work, I couldn't reply promptly. For 3D training, we set the batch size to 1. There is also a trick: add downsampling at the input and upsampling just before the output. Additionally, you can reduce memory consumption by borrowing parts of DSC2D_pro.
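A minimal sketch of the downsample/upsample trick described above, assuming a PyTorch setting. The `DSC3D` block itself isn't reproduced here, so a plain `Conv3d` stands in for it; the class and parameter names are illustrative, not from the repo:

```python
import torch
import torch.nn as nn

class MemoryFriendly3DBlock(nn.Module):
    """Run an expensive 3D block at reduced resolution to save GPU memory."""
    def __init__(self, channels, scale=2):
        super().__init__()
        # Downsample spatial dims before the heavy block.
        self.down = nn.AvgPool3d(kernel_size=scale)
        # Placeholder for the real DSC3D block (assumption: same in/out channels).
        self.block = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        # Upsample back to the original resolution before the output.
        self.up = nn.Upsample(scale_factor=scale, mode="trilinear",
                              align_corners=False)

    def forward(self, x):
        return self.up(self.block(self.down(x)))

# Small patch for illustration; real training patches may be 128x128x128.
x = torch.randn(1, 32, 32, 32, 32)  # batch size 1, as recommended for 3D
y = MemoryFriendly3DBlock(channels=32)(x)
print(tuple(y.shape))  # spatial shape is restored by the final upsample
```

Because the heavy block only sees 1/8 of the voxels at `scale=2`, its activation memory drops roughly eightfold, at the cost of some spatial detail.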
Thanks for sharing! When I apply the DSC3D block in my own net and dataset, I run into an OOM problem. The training patch_size is 128x128x128, the batch size is 2, and input_channels is 32. How can I solve this problem? Or are there any tricks to reduce GPU memory? BTW, my GPU is an RTX 3090 with 24GB of memory.