Hi,
In your paper you said that you use a batch size of 6 with a patch size of 96 × 96 × 96 per NVIDIA RTX A5000, which has 24 GB of video memory. But when I use a batch size of 2 with a patch size of 96 × 96 × 96 on an NVIDIA RTX A5000, I get a CUDA out-of-memory error. I tried three backbones (UNet, SwinUNETR, UNet++) and ran into the same issue with all of them.
There is a parameter `--num_samples` that also affects GPU memory consumption. The number of patches on each card is `num_samples * batch_size`. In our experience, we can train the model on a 24 GB card with `2 * 1` or `1 * 2`.
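A minimal sketch of how these two knobs interact, assuming a MONAI-style data pipeline (the repository's actual training script may differ; the variable names below are illustrative, not taken from the repo):

```python
# Effective patch count per GPU = num_samples (crops per volume) * batch_size (volumes per step).
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, RandCropByPosNegLabeld
from monai.data import Dataset, DataLoader

num_samples = 2   # crops drawn from each volume (corresponds to --num_samples)
batch_size = 1    # volumes per step; 2 * 1 or 1 * 2 both fit a 24 GB card per the comment above

transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    RandCropByPosNegLabeld(
        keys=["image", "label"],
        label_key="label",
        spatial_size=(96, 96, 96),   # patch size from the paper
        num_samples=num_samples,     # each volume yields this many random crops
    ),
])

# files = [{"image": "...", "label": "..."}, ...]  # your own data list
# loader = DataLoader(Dataset(data=files, transform=transforms), batch_size=batch_size)
# Each batch then holds num_samples * batch_size patches of 96^3 on the GPU,
# which is what determines peak memory, not batch_size alone.
```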