As discussed in #142, to facilitate evaluating predicted segmentation images, it would make sense to implement metrics that compare a predicted image with its ground truth, so that different approaches can be compared more efficiently. Measures/distances that should at least be considered are sensitivity, specificity, the Dice coefficient, and the Hausdorff distance. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) paper describes them well in section "E. Evaluation Metrics and Ranking".
Expected Behavior
Find metrics and distances that measure the deviation of a predicted, segmented image from its ground truth. Implement them so 2D as well as 3D images / data cubes can be passed to them.
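A minimal sketch of what the overlap-based metrics could look like (names and signatures are placeholders, not a final API). Flattening the arrays means the same code handles 2D images and 3D data cubes alike:

```python
import numpy as np

def confusion_counts(pred, truth):
    """Return (TP, TN, FP, FN) for two binary masks of any dimensionality."""
    p = np.asarray(pred, dtype=bool).ravel()  # ravel: 2D and 3D treated alike
    t = np.asarray(truth, dtype=bool).ravel()
    tp = int(np.sum(p & t))
    tn = int(np.sum(~p & ~t))
    fp = int(np.sum(p & ~t))
    fn = int(np.sum(~p & t))
    return tp, tn, fp, fn

def sensitivity(pred, truth):
    """TP / (TP + FN): fraction of ground-truth foreground that was found."""
    tp, _, _, fn = confusion_counts(pred, truth)
    return tp / (tp + fn)

def specificity(pred, truth):
    """TN / (TN + FP): fraction of ground-truth background left untouched."""
    _, tn, fp, _ = confusion_counts(pred, truth)
    return tn / (tn + fp)

def dice(pred, truth):
    """2*TP / (2*TP + FP + FN): overlap between prediction and ground truth."""
    tp, _, fp, fn = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)
```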
Possible Solution
scipy.spatial.distance already offers some useful building blocks, e.g. directed_hausdorff for the Hausdorff distance and dice for the Dice dissimilarity on boolean vectors.
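For instance, a symmetric Hausdorff distance could be built on scipy.spatial.distance.directed_hausdorff (the wrapper below is a sketch, not a committed design). np.argwhere turns a mask into an (N, ndim) coordinate array, so the same function covers 2D and 3D masks:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(pred, truth):
    """Symmetric Hausdorff distance between two binary masks (2D or 3D)."""
    # argwhere yields foreground voxel coordinates as an (N, ndim) array,
    # which is exactly the point-set format directed_hausdorff expects.
    u = np.argwhere(np.asarray(pred, dtype=bool))
    v = np.argwhere(np.asarray(truth, dtype=bool))
    # directed_hausdorff is asymmetric; take the max of both directions.
    return max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])
```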
Acceptance criteria