MIC-DKFZ / nnUNet

Apache License 2.0

Performance metrics questions #2189

Closed Peaceandmaths closed 1 month ago

Peaceandmaths commented 2 months ago

Dear nnunet team,

I am using nnunetv2's nnUNetv2_evaluate_folder function to evaluate predictions. I have some questions about the metrics reported in the validation summary.json and in the summary.json of the postprocessed predictions:

1) I would like to have more metrics calculated and reported in the validation and final test summary.json. How can I add Hausdorff Distance 95, False Positive Rate, Precision, Recall, Accuracy, False Negative Rate, and True Negative Rate?

2) Is there a difference between "foreground mean" and "mean"? I usually get the same number for both.

3) Is there a way to report results in the summary.json that are not only voxel-wise but also target-wise (i.e. with respect to connected components)?

4) I see that n_pred = fp + tp and n_ref = fn + tp, but what is the total number of voxels? Or rather, is there a way to report TP, TN, FP, FN rates instead of raw voxel counts?
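The rates asked about in questions 1) and 4) can all be derived from the raw voxel counts that the summary.json already contains. A minimal sketch, assuming you have extracted the four counts per case (the helper name rates_from_counts is hypothetical, not part of nnU-Net):

```python
def rates_from_counts(tp, fp, fn, tn):
    """Derive rate-based metrics from raw voxel counts.

    Note: n_pred = tp + fp and n_ref = tp + fn, as in the question above;
    the total number of voxels is tp + fp + fn + tn.
    """
    eps = 1e-8  # guard against division by zero for empty classes
    n_voxels = tp + fp + fn + tn
    return {
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),            # true positive rate / sensitivity
        "accuracy": (tp + tn) / (n_voxels + eps),
        "fpr": fp / (fp + tn + eps),               # false positive rate
        "fnr": fn / (fn + tp + eps),               # false negative rate
        "tnr": tn / (tn + fp + eps),               # true negative rate / specificity
    }
```

Note that accuracy and TNR depend on tn, which in turn depends on the total image size, so they tend to be inflated for small foreground structures.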

Thank you for your support, Katya

Merom99 commented 2 months ago

Hi @Peaceandmaths, under the evaluation folder you will find evaluator.py. Open it and you will find the class called Evaluator, where you will see the default metrics and default advanced metrics; remove the hash (#) to uncomment the ones you need and it should work!

Peaceandmaths commented 2 months ago

@Merom99 Thanks, I also thought so, but there is no such class in nnunetv2. If I go to the evaluation folder in nnunetv2, the advanced metrics mentioned above are not defined there.

If I used nnunetv2 the whole time for training and predicting, can I use the evaluator from nnunet (version 1) to evaluate my predictions with the old, more extensive code that includes all the metrics I need?

mehnaz1985 commented 2 months ago

@Peaceandmaths, could you please post the command you use for evaluation here?

Is this right? nnUNetv2_evaluate_folder -ref Gt_folder_path -pred Prediction_folder_path -l 1

Peaceandmaths commented 2 months ago

@mehnaz1985 My command: nnUNetv2_evaluate_folder test_labels_path predicted_labels_path -djfile dataset.json_path -pfile nnUNet_results/Dataset{dataset_id}/nnUNetTrainer_nnUNetPlans__3d_fullres/plans.json --chill

ykirchhoff commented 2 months ago

Hi @Peaceandmaths,

nnUNetv2 only contains code for evaluating the metrics included in the summary.json files, i.e. Dice and IoU. You can add your own metrics by adding them there; you can find code for various additional metrics in nnUNetv1.
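For the target-wise reporting asked about in question 3), one option is to compute it yourself outside of nnU-Net from the exported label maps. A minimal sketch, assuming binary masks and using scipy's connected-component labeling (the function name, the 10% overlap threshold, and the detection criterion are illustrative choices, not anything defined by nnU-Net):

```python
import numpy as np
from scipy import ndimage


def targetwise_detection(ref, pred, overlap_thresh=0.1):
    """Count reference targets (connected components) hit by the prediction.

    A target counts as detected if at least `overlap_thresh` of its voxels
    are predicted as foreground. Works on binary masks of any dimension.
    """
    ref = np.asarray(ref, dtype=bool)
    pred = np.asarray(pred, dtype=bool)
    # Label each connected foreground component in the reference
    ref_lab, n_ref = ndimage.label(ref)
    detected = 0
    for i in range(1, n_ref + 1):
        comp = ref_lab == i
        overlap = np.count_nonzero(comp & pred) / np.count_nonzero(comp)
        if overlap >= overlap_thresh:
            detected += 1
    return {"n_ref_lesions": n_ref, "n_detected": detected}
```

You could run this per case over the prediction folder and merge the results into your own JSON alongside nnU-Net's voxel-wise summary. Note that ndimage.label uses face connectivity by default; pass a custom `structure` if you want diagonal neighbors to count as connected.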

Best, Yannick