Closed: zhi-xuan-chen closed this issue 1 month ago
Yes.
Thanks! And I noticed that you mask the predicted mask with the ignore mask generated from the true mask before computing the metric. But this metric won't be very reliable when the predicted mask gives many wrong predictions inside the ignored region. Would it be more reasonable not to apply the mask when calculating metrics, since in real situations there is no ground truth from which to generate an ignore mask?
Ignored masks cover those regions of the training set where there are no annotations at all. They were not labeled by the pathologists and could belong to any class, so we cannot compute metrics on these regions. It is also not reasonable to treat them as a separate class, because they might belong to existing classes; they are simply not labeled. Ideally, we would cut these regions out of the training images; masking them is a workaround.
I guess the authors of the BCSS dataset wanted some surrounding context around the annotated region so that the model can make more accurate decisions. This is very common in computational pathology: e.g. the label of the PCam dataset corresponds only to the center part of the image rather than the entire image. But unlike PCam, which is a classification dataset, for the BCSS dataset we need to make sure the unlabeled regions neither generate any real losses nor contribute to our metrics.
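To make the masking step above concrete, here is a minimal NumPy-only sketch (not the repository's actual code) of computing a per-class IoU while excluding the ignored region. The function name `masked_iou` and the boolean `ignore` convention (True marks unlabeled pixels) are assumptions for illustration.

```python
import numpy as np

def masked_iou(pred, true, ignore, num_classes):
    """Per-class IoU computed only on pixels where ignore is False.

    pred, true: integer class maps of the same shape.
    ignore: boolean mask, True where the pixel has no annotation.
    """
    valid = ~ignore
    pred = pred[valid]
    true = true[valid]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, true == c).sum()
        union = np.logical_or(pred == c, true == c).sum()
        ious.append(inter / union if union > 0 else np.nan)
    return ious

# Toy example: the top-right pixel is unlabeled, so its wrong
# prediction does not hurt the score.
true = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
ignore = np.array([[False, True], [False, False]])
print(masked_iou(pred, true, ignore, num_classes=2))  # [1.0, 1.0]
```

Without the `valid` filter, the top-right pixel would count as a false positive for class 1 and a false negative for class 0, even though no ground truth exists there.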
Are the unlabeled regions you mentioned those labeled as "outside_roi"? And is the "background" label of the CRAG dataset also an unlabeled region?
So the CRAG dataset does not have an ignored mask?
No, we only use the ignored masks in the BCSS dataset. There are no unlabeled regions in the CRAG dataset.
OK. And what is the difference between the unlabeled region and the background? In my opinion they are the same thing, so they should be treated in the same way.
Unlabeled region: there should be no real losses and no metrics. Background: there should be losses as normal, but it should not be considered when calculating metrics.
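The two treatments above can be sketched as a pair of boolean masks. This is a hypothetical NumPy illustration, not the repository's code; the label ids (`UNLABELED = 0`, `BACKGROUND = 1`) are assumptions.

```python
import numpy as np

UNLABELED, BACKGROUND = 0, 1  # hypothetical label ids

def training_and_metric_masks(true_mask):
    """Return (loss_mask, metric_mask) per the two rules:
    - loss is computed on every labeled pixel, including background;
    - metrics additionally exclude the background class.
    """
    loss_mask = true_mask != UNLABELED
    metric_mask = loss_mask & (true_mask != BACKGROUND)
    return loss_mask, metric_mask

true_mask = np.array([0, 1, 2, 2])
loss_mask, metric_mask = training_and_metric_masks(true_mask)
print(loss_mask)    # [False  True  True  True]
print(metric_mask)  # [False False  True  True]
```

So background pixels still drive the loss (the model must learn to predict them), while the metric is restricted to the tissue classes of interest.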
I want to ask whether the metrics in your paper are all “micro” metrics without _bal.