Closed mustansarfiaz closed 2 years ago
Hi Mustansar,
Thanks for your interest in our paper.
In version 1, we used a different way to calculate HD95 (which is now deprecated), so I recommend using version 2, as it is written on top of nnUNet's configurations and settings. During inference you have to specify the fold number; use the following command to generate predictions for each fold:
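For reference, here is a minimal sketch of one common way to compute HD95 (the 95th-percentile symmetric surface distance) using SciPy distance transforms. This is an illustrative implementation for single-class binary masks, not the exact code used in either version of the repo; version 2 relies on nnUNet's own evaluation pipeline.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(pred, gt, spacing=(1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between two binary masks.

    Sketch only: assumes both masks are non-empty and single-class.
    `spacing` is the physical voxel spacing along each axis.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels = mask minus its erosion.
    surf_pred = pred & ~binary_erosion(pred)
    surf_gt = gt & ~binary_erosion(gt)
    # Euclidean distance of every voxel to the nearest surface voxel of the
    # other mask, scaled by the voxel spacing.
    dist_to_gt = distance_transform_edt(~surf_gt, sampling=spacing)
    dist_to_pred = distance_transform_edt(~surf_pred, sampling=spacing)
    # Pool both directed surface-to-surface distances, take the 95th percentile.
    distances = np.concatenate([dist_to_gt[surf_pred], dist_to_pred[surf_gt]])
    return float(np.percentile(distances, 95))
```

Identical masks give 0.0, and passing the correct `spacing` matters: the same masks evaluated at a different voxel spacing scale the distance accordingly, which is one common source of "wrong HD95" results.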
```
CUDA_VISIBLE_DEVICES=0 vtunet_predict -i imagesTs -o inferTs/vtunet_tumor -m 3d_fullres -t 3 -f 0 -chk model_best -tr vtunetTrainerV2_vtunet_tumor
```

If you want consolidated results, you have to train the model for fold=1, fold=2, and fold=3 as well, and then run consolidate_postprocessing_simple.py.
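Putting those steps together, the multi-fold workflow might look like the sketch below. This assumes the repo's CLI mirrors nnUNet's (a `vtunet_train` entry point taking configuration, trainer, task, and fold) and that the nnUNet environment variables are already set; the exact flags of consolidate_postprocessing_simple.py may differ, so check the script's `--help`.

```
# Train the remaining folds (fold 0 assumed already trained)
for FOLD in 1 2 3; do
    CUDA_VISIBLE_DEVICES=0 vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor 3 $FOLD
done

# Predict with each fold's best checkpoint
for FOLD in 0 1 2 3; do
    CUDA_VISIBLE_DEVICES=0 vtunet_predict -i imagesTs -o inferTs/vtunet_tumor_fold$FOLD \
        -m 3d_fullres -t 3 -f $FOLD -chk model_best -tr vtunetTrainerV2_vtunet_tumor
done

# Consolidate postprocessing across the trained folds
python consolidate_postprocessing_simple.py -t 3
```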
Kind regards, Himashi
Dear author, I find your paper very interesting, and I'm new to the medical imaging field. I am trying to reproduce your model's results for version 1 and version 2.
In version 1, I couldn't find the HD95 code, so I copied it from here. It gives the wrong HD95 during evaluation; could you please provide the version 1 implementation?
In version 2, when I run the inference code it gives the error shown in the figure. To resolve this issue I ran consolidate_postprocessing_simple.py to compute the postprocessing.json file, but it reports that fold_0, fold_1, fold_2, etc. are missing. I trained the model for fold=0 as described in the instructions. Could you please look into this?