Hey @Healingl. It's normal for MedNeXt to have higher memory consumption than nnUNet. This is because nnUNet's dynamic network architecture was designed with the specific goal of fitting into 11 GB of GPU memory during training while still giving good performance. MedNeXt, on the other hand, was built to scale the network effectively without saturating performance, with no real restrictions placed on memory consumption.
While both share the same preprocessing, data augmentation, training, and evaluation frameworks, the architecture design principles are completely different, so it's normal for MedNeXt to consume more memory.
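If you want to verify the difference yourself, here is a minimal, hypothetical PyTorch sketch (not part of this repo) for measuring the peak GPU memory of a single forward/backward pass at your patch size. The model constructors are placeholders you would replace with the actual nnUNet and MedNeXt-S networks from your training setup.

```python
import torch

def peak_training_memory_gb(model, patch_size=(1, 1, 80, 160, 160), device="cuda"):
    """Run one forward/backward pass on random data and report peak allocated GPU memory in GB."""
    model = model.to(device).train()
    torch.cuda.reset_peak_memory_stats(device)

    x = torch.randn(patch_size, device=device)
    out = model(x)
    # Dummy scalar loss so backward() works whether the network returns a tensor
    # or a list of deep-supervision outputs.
    loss = out.sum() if torch.is_tensor(out) else sum(o.sum() for o in out)
    loss.backward()

    return torch.cuda.max_memory_allocated(device) / 1024**3

# Example usage (placeholder model objects):
# print(peak_training_memory_gb(nnunet_model))     # ~10 GB expected
# print(peak_training_memory_gb(mednext_s_model))  # noticeably higher is expected
```

This only measures the network itself (weights, activations, gradients), so optimizer state and data-loading overhead will add a bit on top, but it should make the architectural difference visible.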
Hope that helps.
Thank you!
Hi, I recently used the MedNeXt model (5.6M parameters, Small (S)) to segment volumes with an image size of (80, 160, 160), but it requires 20 GB of GPU memory, while nnUNet only needs 10 GB. Is that normal? Can you help me resolve the problem of excessive GPU memory consumption?