The current metric-evaluation code resamples the prediction and the ground truth back to the original resolution using the image metadata. This is better than evaluating at the preprocessed resolution (which is lower), but instead of resampling the preprocessed ground truth, the evaluation should be performed against the unprocessed ground truth.
utils > callbacks.py > NiftiPredictionWriter
# Save the prediction and the (preprocessed) ground truth as NIfTI, using the stored metadata.
metadict = prediction["seg_meta_dict"]
if self.save_pred:
    self.pred_nifti_saver.save_batch(prediction["pred"], metadict)
if self.save_gt:
    self.gt_nifti_saver.save_batch(prediction["gt"], metadict)
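One way to address this would be to compute the metric between the prediction written back to the original resolution by NiftiPredictionWriter and the ground-truth file as it exists on disk, never touching the preprocessed ground truth. The sketch below is a minimal illustration, not the project's existing evaluation code; it assumes nibabel is available and uses hypothetical file paths and a binary segmentation.

import nibabel as nib
import numpy as np


def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Binary Dice coefficient between two masks of identical shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


# Prediction written by the callback, resampled to the original resolution (hypothetical path).
pred_img = nib.load("outputs/case_001_pred.nii.gz")
# Unprocessed ground truth straight from the dataset, never passed through the
# preprocessing transforms (hypothetical path).
gt_img = nib.load("data/case_001_label.nii.gz")

pred = np.asarray(pred_img.dataobj)
gt = np.asarray(gt_img.dataobj)

# Once the prediction is back on the original grid, the two volumes should share
# shape and affine; fail loudly if they do not.
assert pred.shape == gt.shape, (pred.shape, gt.shape)
assert np.allclose(pred_img.affine, gt_img.affine, atol=1e-3)

print(f"Dice vs unprocessed ground truth: {dice_score(pred > 0, gt > 0):.4f}")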