Malitha123 opened 6 days ago
Hi,
unfortunately, I can't give a concise answer to this, but in this file you can follow the algorithm for how the plans are set for a given configuration: https://github.com/MIC-DKFZ/nnUNet/blob/master/nnunetv2/experiment_planning/experiment_planners/residual_unets/residual_encoder_unet_planners.py
Look at the `get_plans_for_configuration` method and what changes for the M / L / XL configurations (hint: `UNet_reference_val_2d` / `UNet_reference_val_3d`).
If that doesn't help, I'd propose creating plans for the individual configurations (M / L / XL) in your case (dataset) and looking at the plans differences.
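To compare generated plans, a generic recursive dict diff is enough, since the plans are written as nested JSON. A minimal sketch (the plan contents below are made-up toy values, not real nnUNet plans):

```python
def diff_dicts(a, b, prefix=""):
    """Recursively list (path, value_a, value_b) for keys that differ
    between two plan-like nested dicts."""
    diffs = []
    for key in sorted(set(a) | set(b)):
        path = f"{prefix}{key}"
        if key not in a or key not in b:
            diffs.append((path, a.get(key), b.get(key)))
        elif isinstance(a[key], dict) and isinstance(b[key], dict):
            diffs.extend(diff_dicts(a[key], b[key], prefix=path + "."))
        elif a[key] != b[key]:
            diffs.append((path, a[key], b[key]))
    return diffs

# Toy stand-ins for two plans files (NOT the real plan contents):
plans_m = {"configurations": {"3d_fullres": {"patch_size": [128, 128, 128], "batch_size": 2}}}
plans_l = {"configurations": {"3d_fullres": {"patch_size": [160, 160, 160], "batch_size": 2}}}

for path, v_m, v_l in diff_dicts(plans_m, plans_l):
    print(path, v_m, "->", v_l)
```

In practice you would `json.load` the two plans files from your `nnUNet_preprocessed` dataset folder and pass them to `diff_dicts`.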
A rough overview would be: For determining the network topology, this function is used: https://github.com/MIC-DKFZ/nnUNet/blob/master/nnunetv2/experiment_planning/experiment_planners/network_topology.py#L30
The above-mentioned `get_plans_for_configuration` progressively reduces the patch size until the estimated memory consumption fits the target GPU memory budget, and adapts the network topology accordingly via `get_pool_and_conv_props`.
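To make the "shrink the patch until it fits the budget" idea concrete, here is a minimal sketch. The memory estimate, the feature counts, and the budget are toy stand-ins for illustration only; nnUNet's real planner derives its estimate from the full architecture:

```python
import math

def toy_memory_estimate(patch_size, features_per_stage=(32, 64, 128, 256)):
    """Toy proxy for VRAM use: total feature-map voxels across encoder
    stages, assuming each stage halves every spatial axis."""
    total, shape = 0, list(patch_size)
    for f in features_per_stage:
        total += f * math.prod(shape)
        shape = [max(1, s // 2) for s in shape]
    return total

def shrink_patch_to_budget(initial_patch, budget):
    """Greedily shrink the largest axis until the estimate fits the budget
    (mirrors the planner's loop in spirit only)."""
    patch = list(initial_patch)
    while toy_memory_estimate(patch) > budget and max(patch) > 8:
        axis = patch.index(max(patch))  # shrink the largest axis first
        patch[axis] -= 8
    return patch

patch = shrink_patch_to_budget([160, 160, 160], budget=50_000_000)
print(patch, toy_memory_estimate(patch))
```

The larger the memory budget (cf. the different `UNet_reference_val_3d` values in the planner classes), the larger the patch that survives this loop, which is also what then feeds into the topology computation.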
I hope this helps!
Thank you for your response. I will take a look.
Hi,
Just out of curiosity, what is the fundamental difference between ResEnc M, ResEnc L, and ResEnc XL architecture-wise? Does it change the depth of the network?