For FSS-1000, we also noticed that DAN uses an evaluation metric called P-IoU, which is inconsistent with the original evaluation metric (mean IoU) used in the FSS-1000 paper; the original paper says "The metric we use is the intersection-over-union (IoU) of positive labels in a binary segmentation map," where positive labels indicate foreground mask labels. Because of this ambiguous description, we also wondered whether the mean IoU used in the FSS-1000 paper is a 'class-wise' average of IoU values (the standard) or an 'example-wise' average of IoU values. There is FSS-1000 code available online, but it leaves out the evaluation part (the IoU computation). The code of DAN isn't available online at all, so we e-mailed the authors of DAN 3-4 times to ask about the metric (P-IoU), but we never got a reply.
Thus we tried both class-wise and example-wise averages of IoUs in our evaluation, and we obtained better results with the example-wise metric. However, we used the class-wise average of IoUs (the mIoU used for PASCAL-5i and COCO-20i) when evaluating FSS-1000 because it is more widely used.
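For concreteness, here is a minimal sketch of the two averaging schemes we compared (hypothetical function names; this is not the exact evaluation code from our repo), assuming each prediction/ground-truth pair is a binary foreground mask with a known class id:

```python
import torch

def foreground_iou(pred, gt):
    # IoU of the positive (foreground) labels in a binary segmentation map
    inter = (pred & gt).sum().float()
    union = (pred | gt).sum().float()
    return (inter / union.clamp(min=1)).item()

def example_wise_iou(preds, gts):
    # 'Example-wise': one IoU per test episode, averaged over episodes
    ious = [foreground_iou(p, g) for p, g in zip(preds, gts)]
    return sum(ious) / len(ious)

def class_wise_miou(preds, gts, class_ids):
    # 'Class-wise' (standard mIoU, as in PASCAL-5i / COCO-20i):
    # accumulate intersection/union per class, then average the class IoUs
    inter, union = {}, {}
    for p, g, c in zip(preds, gts, class_ids):
        inter[c] = inter.get(c, 0) + (p & g).sum().item()
        union[c] = union.get(c, 0) + (p | g).sum().item()
    ious = [inter[c] / max(union[c], 1) for c in union]
    return sum(ious) / len(ious)

# preds, gts: lists of torch.bool tensors of shape [H, W];
# class_ids: list of ints giving the object class of each episode
```

Note that the example-wise average weights every episode equally, whereas the class-wise average weights every class equally; the gap between the two is what we observed in our experiments.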
OK, thank you very much. I have another question: the PASCAL VOC dataset is split into only two parts, train and val, is that right?
Yes. We use the same train/validation splits used in all previous few-shot segmentation methods for a fair comparison.
BTW, for the COCO-20i dataset, we found that some methods (including PFENet) use the COCO2014 train/val sets while others (PMM, I believe, among others) use the COCO2017 train/val sets. However, the two mIoU results (with COCO2014 and COCO2017) were about the same in our experiments, so we use COCO2014 following PFENet (for its state-of-the-art performance).
Thanks again.
I want to ask another question about ignore_index. I found that in PFENet, they do not compute the loss and IoU at ignore_index positions (where the label is 255). So, how do you deal with it? I found ignore_idxs in your code, but they don't seem to be utilized.
Although you provide the code, I want to ask so I can understand it more clearly.
Thanks
The PASCAL VOC dataset uses a special label called "ignore_label", which marks pixel regions (usually near object boundaries) that are ignored during IoU computation, because pixel-wise segmentation near object boundaries is ambiguous even for human annotators.
Since PASCAL-5i is a subset of PASCAL VOC, this evaluation scheme naturally carries over to PASCAL-5i, while neither COCO-20i nor FSS-1000 uses such a special label in its evaluation. Thus, ignore_idxs are utilized only for PASCAL-5i.
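As a rough sketch of what excluding the ignore label looks like in practice (hypothetical names, not the exact code from this repo):

```python
import torch

IGNORE_LABEL = 255  # PASCAL VOC convention for ambiguous boundary pixels

def iou_with_ignore(pred, gt):
    # pred, gt: integer tensors of shape [H, W] with values {0, 1},
    # except gt may also contain IGNORE_LABEL near object boundaries
    valid = gt != IGNORE_LABEL          # mask of pixels that count
    pred_fg = (pred == 1) & valid
    gt_fg = (gt == 1) & valid
    inter = (pred_fg & gt_fg).sum().float()
    union = (pred_fg | gt_fg).sum().float()
    return (inter / union.clamp(min=1)).item()
```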
This is covered in more detail in Appendix C of our paper (https://arxiv.org/pdf/2104.01538.pdf) and in the GitHub issue I opened recently: https://github.com/Jia-Research-Lab/PFENet/issues/25
Thank you, my dear Korean friend.
I noticed "We freeze the pre-trained backbone networks to prevent them from learning class-specific representations of the training data." in your paper, but I have not found the related code, such as requires_grad=False. So, I want to know where the code for freezing the backbone is.
Note that the with torch.no_grad(): context disables gradient calculation; in this mode, the result of every computation has requires_grad=False. We enable this mode in lines 48-52 of ./model/hsnet to prevent the backbone parameters from being updated during training.
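The actual module layout in ./model/hsnet differs, but a minimal sketch of the idea, assuming a ResNet-50 backbone, looks like this:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class FewShotSegModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Pre-trained feature extractor (frozen) + a small learnable head
        self.backbone = nn.Sequential(*list(resnet50(pretrained=True).children())[:-2])
        self.backbone.eval()  # also keeps batch-norm statistics frozen
        self.head = nn.Conv2d(2048, 2, kernel_size=1)

    def forward(self, img):
        # Inside this context every result has requires_grad=False, so the
        # backbone parameters never receive gradients during training.
        with torch.no_grad():
            feats = self.backbone(img)
        return self.head(feats)
```

An alternative is to set p.requires_grad = False on each backbone parameter; the no_grad context achieves the same effect while also skipping the backbone's portion of the backward graph entirely.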
I noticed that in your project, when computing the loss, pixels at ignore_idx positions are reset to zero as background, and when computing mIoU, pixels at ignore_idx positions are ignored. Is that right?
Yes, and it is done only for the PASCAL-5i dataset.
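For clarity, a minimal sketch of that two-sided treatment (hypothetical names; assumes logits of shape [B, 2, H, W] and integer labels with 255 as the ignore label):

```python
import torch
import torch.nn.functional as F

IGNORE_LABEL = 255

def seg_loss(logits, gt):
    # Loss: ignore-labeled pixels are reset to background (class 0)
    target = gt.clone()
    target[target == IGNORE_LABEL] = 0
    return F.cross_entropy(logits, target)

def eval_foreground_iou(logits, gt):
    # Evaluation: ignore-labeled pixels are excluded from the counts
    pred = logits.argmax(dim=1)
    valid = gt != IGNORE_LABEL
    inter = ((pred == 1) & (gt == 1) & valid).sum().float()
    union = (((pred == 1) | (gt == 1)) & valid).sum().float()
    return (inter / union.clamp(min=1)).item()
```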
In FSS-1000, they said that "The metric we use is the IoU of positive labels in a binary segmentation map." And in DAN, they said that "Note that the metric on FSS-1000 is the IoU of positive labels (P-IoU) in binary segmentation maps." But neither provides clear code to explain it. I think it is the sum of the IoUs over every test image, divided by the number of test images. So, I want to ask for your help: do you think the FSS-1000 metric differs from the PASCAL VOC metric (mIoU over classes)?