Hi, thanks for pointing this out. Have you verified on one of the standard datasets that this happens there as well? If it works fine on those, we might have to dive deeper into your dataset (and I will only be able to help if you can share it).
Hi, thanks for your suggestion. As suggested, I tried training and inference on Task08 of the Decathlon, and there was no such offset. So I think the problem may be caused by the inference process or by the data processing in the second stage of the cascade model. However, the dataset used in my experiment is not yet public, so I am going to dig into the code of the inference part and update this issue later.
Hey @Ruikun-Li
Did you manage to dig into the code of the inference part for your specific dataset? Otherwise, you might try again with nnUNetV2 and see whether the issue appears there as well.
I followed the tutorial to train the cascaded model:
1. `nnUNet_train 3d_lowres nnUNetTrainerV2 TaskXXX_MYTASK FOLD`
2. `nnUNet_train 3d_cascade_fullres nnUNetTrainerV2CascadeFullRes TaskXXX_MYTASK FOLD`
3. `nnUNet_find_best_configuration -m 3d_lowres 3d_cascade_fullres -t XXX`
4. the inference commands generated by `nnUNet_find_best_configuration` (roughly of the form sketched below)
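For context, a minimal sketch of what the generated cascade inference typically looks like in nnU-Net v1; the folder names are placeholders and the exact flags are the ones printed by `nnUNet_find_best_configuration`:

```bash
# Predict with the 3d_lowres model first (INPUT_FOLDER / OUTPUT_* are placeholders)
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_LOWRES -t XXX -m 3d_lowres

# Then run the cascade, pointing it at the low-resolution predictions
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_CASCADE -t XXX -m 3d_cascade_fullres \
    --lowres_segmentations OUTPUT_LOWRES
```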
However, whether I train and run inference with the 5 folds or with fold 'all', the predicted segmentations of the second stage are significantly offset from the segmentations of the first stage, resulting in poor segmentation metrics.
I suspect that the problem is caused by the data processing during the inference phase, because inference on the training set also leads to this offset, but I did not find a specific error. Furthermore, this offset does not occur when training a fullres model alone. The segmentation of the first stage (white) and the second stage (red) are shown in the attached figure.
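In case it helps narrow this down, a quick sanity check I can think of is to compare the image geometry of the two predictions for one case, which would show whether the offset comes from mismatched headers rather than from the voxel data itself. This is only a sketch with placeholder paths; SimpleITK is installed together with nnU-Net, so it should run in the same environment:

```bash
# Compare size/spacing/origin/direction of the stage-1 and stage-2 predictions
# (folder and case names are placeholders)
python -c '
import SimpleITK as sitk
for path in ("OUTPUT_LOWRES/case_001.nii.gz", "OUTPUT_CASCADE/case_001.nii.gz"):
    img = sitk.ReadImage(path)
    print(path)
    print("  size     :", img.GetSize())
    print("  spacing  :", img.GetSpacing())
    print("  origin   :", img.GetOrigin())
    print("  direction:", img.GetDirection())
'
```

If the headers match but the foreground is still shifted, the offset would have to be in the voxel data itself, which would point towards the cropping/resampling of the low-resolution segmentation in the second stage.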