naamiinepal / xrayto3D-benchmark

GNU General Public License v3.0

Evaluation on Original Resolution #1

Open msrepo opened 1 year ago

msrepo commented 1 year ago

Description

The current metric-evaluation code resamples both the prediction and the ground truth to the original resolution using the stored metadata. This is better than evaluating at the preprocessed resolution (which is coarser), but instead of comparing against a resampled copy of the preprocessed ground truth, the evaluation should be done with respect to the unprocessed ground truth.

`utils/callbacks.py` > `NiftiPredictionWriter`

```python
        metadict = prediction["seg_meta_dict"]
        if self.save_pred:
            self.pred_nifti_saver.save_batch(prediction["pred"], metadict)
        if self.save_gt:
            # Note: "gt" here is the preprocessed ground truth, written back
            # using the same metadata, i.e. a resampled copy rather than the
            # original unprocessed ground truth.
            self.gt_nifti_saver.save_batch(prediction["gt"], metadict)
```
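A minimal sketch of why the two ground truths differ, using plain NumPy with nearest-neighbour index resampling standing in for the real metadata-driven resampling (the shapes, the sphere mask, and the `resample_nearest` helper are all illustrative assumptions, not repo code):

```python
import numpy as np

def resample_nearest(vol, out_shape):
    # Illustrative nearest-neighbour resampling by index mapping;
    # real pipelines resample via the stored affine metadata.
    idx = [np.clip((np.arange(o) * s // o), 0, s - 1)
           for o, s in zip(out_shape, vol.shape)]
    return vol[np.ix_(*idx)]

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# "Unprocessed" ground truth at original resolution: a sphere mask.
grid = np.indices((64, 64, 64))
gt_orig = ((grid - 32) ** 2).sum(axis=0) < 20 ** 2

# Preprocessing downsamples to a coarser grid, losing boundary detail.
gt_low = resample_nearest(gt_orig, (24, 24, 24))

# Resampling the preprocessed GT back up does NOT recover the original,
# so metrics computed against it are optimistic approximations.
gt_roundtrip = resample_nearest(gt_low, (64, 64, 64))

d = dice(gt_roundtrip, gt_orig)
print(f"Dice(round-tripped GT, original GT) = {d:.3f}")  # strictly below 1.0
```

The round-trip loss at the mask boundary is exactly what evaluating against a resampled preprocessed ground truth hides, which is why the metrics should instead use the unprocessed ground truth directly.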