I noticed that the mIoU reported by ChainerCV does not match the mIoU I compute manually.

Here is an example. Say preds and labels are two lists containing the predictions and the ground-truth data. I can compute the confusion matrix via

chainercv.evaluations.calc_semantic_segmentation_confusion(preds, labels)

and I can also compute the mIoU via

chainercv.evaluations.eval_semantic_segmentation(preds, labels)

Based on the confusion matrix, the mIoU can be computed as

np.nanmean(np.diag(confusion) / (confusion.sum(axis=1) + confusion.sum(axis=0) - np.diag(confusion)))

but this result does not match

np.nanmean(chainercv.evaluations.eval_semantic_segmentation(preds, labels)['iou'])
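For reference, the confusion-matrix bookkeeping and the per-class IoU formula above can be reproduced in plain NumPy. This is a minimal sketch, not ChainerCV's exact implementation: the function names, the fixed n_class argument, and the "ignore negative labels" handling are my assumptions.

```python
import numpy as np

def confusion_matrix(preds, labels, n_class):
    # Accumulate one confusion matrix over all images.
    # confusion[i, j] counts pixels with ground-truth class i
    # that were predicted as class j.
    confusion = np.zeros((n_class, n_class), dtype=np.int64)
    for pred, label in zip(preds, labels):
        pred = np.asarray(pred).ravel()
        label = np.asarray(label).ravel()
        # Assumption: skip pixels with a negative (ignore) label.
        mask = label >= 0
        confusion += np.bincount(
            n_class * label[mask] + pred[mask],
            minlength=n_class ** 2).reshape(n_class, n_class)
    return confusion

def iou_per_class(confusion):
    # IoU per class: diag / (row sum + col sum - diag).
    diag = np.diag(confusion)
    denom = confusion.sum(axis=1) + confusion.sum(axis=0) - diag
    with np.errstate(invalid='ignore'):
        # 0/0 becomes NaN for classes absent from both pred and gt,
        # which np.nanmean then skips.
        return diag / denom

# Tiny worked example with two classes.
preds = [np.array([[0, 1], [1, 1]])]
labels = [np.array([[0, 1], [0, 1]])]
confusion = confusion_matrix(preds, labels, n_class=2)
miou = np.nanmean(iou_per_class(confusion))
```

Note that eval_semantic_segmentation also exposes the mean directly under the 'miou' key of its result dict, so comparing against result['miou'] as well may help narrow down where the numbers diverge.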