Open · davidkflau opened this issue 1 month ago
Hello. In my opinion, the patch size might play a role, and maybe the encoder as well. Could you please share your training plans?
Here's the training plan. I also tried the EncoderM architecture, but it doesn't help much.
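(Side note for other readers: the patch size nnUNet chose can also be read directly from the generated plans file. Below is a minimal sketch assuming nnUNet v2's `nnUNetPlans.json` layout; the dataset path is a placeholder, and nnUNet v1 stores a pickled plans file instead.)

```python
# Sketch only: print the patch size / spacing nnUNet selected per configuration,
# assuming the v2 nnUNetPlans.json layout. The path below is a placeholder.
import json
from pathlib import Path

plans_path = Path("nnUNet_preprocessed/Dataset001_Lesions/nnUNetPlans.json")
plans = json.loads(plans_path.read_text())

for name, cfg in plans["configurations"].items():
    # Cascade configurations may only inherit from another entry and carry
    # no patch_size of their own, so guard for the key.
    if "patch_size" in cfg:
        print(f"{name}: patch_size={cfg['patch_size']}, spacing={cfg.get('spacing')}")
```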
One more important question I forgot to ask: did you count the small segments in your dataset? I mean the number of separate segments smaller than, say, 100 voxels.
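For anyone who wants to compute that count, here is a minimal sketch. It assumes nnUNet-style NIfTI label maps in a `labelsTr` folder and uses scipy's connected-component labelling; the folder name, the binarisation of all foreground labels, and the 100-voxel cutoff are illustrative assumptions.

```python
# Sketch: count connected lesion components below / above a voxel threshold
# across a folder of ground-truth label maps. Path and cutoff are assumptions.
from pathlib import Path

import nibabel as nib
import numpy as np
from scipy import ndimage

LABELS_DIR = Path("labelsTr")   # assumed ground-truth folder (nnUNet raw layout)
SMALL_CUTOFF = 100              # voxels, as mentioned above

small_total, large_total = 0, 0
for label_file in sorted(LABELS_DIR.glob("*.nii.gz")):
    mask = nib.load(str(label_file)).get_fdata() > 0   # all foreground labels
    components, n = ndimage.label(mask)                 # face-connected components
    sizes = np.bincount(components.ravel())[1:]         # voxels per component
    small_total += int((sizes < SMALL_CUTOFF).sum())
    large_total += int((sizes >= SMALL_CUTOFF).sum())

print(f"components < {SMALL_CUTOFF} voxels: {small_total}")
print(f"components >= {SMALL_CUTOFF} voxels: {large_total}")
```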
Is the issue still persisting? If not, it would be great if you, @davidkflau, could post your solution and close the issue for others who have similar questions.
Hi team,
I'm training on a brain lesion segmentation task. The lesions vary widely in size, from <10 mL to >200 mL. We first fed both the small and large lesion data to nnUNet, but the average Dice was not good; in particular, the Dice scores for small lesions were poor. We then trained a second model on the large-lesion data only, and its Dice scores were much better than the first model's (see the sketch at the end of this post for one way to break scores down by lesion size).
Just wondering whether nnUNet works well on small objects, and do you have any suggestions for segmenting small and large objects at the same time?
Thanks.
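For anyone who wants to reproduce this kind of comparison, a size-stratified check like the one below makes a small-lesion drop visible directly. This is a minimal sketch, not the author's evaluation: the 100-voxel cutoff and the toy random masks are illustrative assumptions, and per-lesion "coverage" (the fraction of each ground-truth lesion the prediction reaches) is used instead of a full lesion-wise Dice for brevity.

```python
# Sketch: compare global Dice with a per-lesion, size-stratified view,
# so a drop on small lesions shows up directly. Cutoff and toy inputs
# are illustrative assumptions.
import numpy as np
from scipy import ndimage

def dice(gt: np.ndarray, pred: np.ndarray) -> float:
    """Plain foreground Dice between two binary masks."""
    inter = np.logical_and(gt, pred).sum()
    denom = gt.sum() + pred.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def lesionwise_coverage(gt: np.ndarray, pred: np.ndarray, small_cutoff: int = 100):
    """For every connected ground-truth lesion, the fraction of its voxels the
    prediction covers, averaged separately over small and large lesions."""
    components, n = ndimage.label(gt > 0)
    buckets = {"small": [], "large": []}
    for i in range(1, n + 1):
        lesion = components == i
        size = int(lesion.sum())
        covered = float(np.logical_and(pred > 0, lesion).sum()) / size
        buckets["small" if size < small_cutoff else "large"].append(covered)
    return {k: (float(np.mean(v)) if v else None) for k, v in buckets.items()}

# Toy example: random masks stand in for real ground-truth / prediction volumes.
rng = np.random.default_rng(0)
gt = rng.random((64, 64, 64)) > 0.995
pred = gt & (rng.random((64, 64, 64)) > 0.3)   # prediction misses ~30% of GT voxels
print("global Dice:", round(dice(gt, pred), 3))
print("per-lesion coverage:", lesionwise_coverage(gt, pred))
```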