sinahmr / DIaM

Official PyTorch Implementation of DIaM in "A Strong Baseline for Generalized Few-Shot Semantic Segmentation" (CVPR 2023)
MIT License

BAM Novel Accuracy vs. Your Reproduced BAM Accuracy #1

Closed alpoler closed 1 year ago

alpoler commented 1 year ago

Hello,

The original BAM paper reports its novel-class mIoU on PASCAL-5i as 47.93%, while you state that you obtain 27.49% mIoU. What is the difference between your configuration and theirs? Why do you get a lower novel-class mIoU on PASCAL-5i than the original paper?

sinahmr commented 1 year ago

Hi,

In the generalized setting we use, which was proposed here, all of the novel classes present in a query image have to be recognized. In BAM's experimental setting, by contrast, a prediction is made for only one novel class at a time (possibly alongside base classes). Consider PASCAL-5^i as an example: in our setting, the model segments a query image over 21 classes (bg, base classes, novel classes), whereas in BAM's generalized setting, the model only looks for 16 classes (bg, base classes, only one novel class). We believe that being able to recognize all classes at once is more practical and realistic. We have adapted BAM to this setting, as detailed in the appendix ("Adaptation of BAM to multi-class GFSS" section). As expected, having to recognize more classes decreases its novel-class performance.
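To make the difference concrete, here is a minimal sketch of the two evaluation label spaces for one PASCAL-5^i fold. The class indices are hypothetical and chosen only for illustration; the assumption is the standard split of 15 base and 5 novel foreground classes per fold, plus background.

```python
# Assumed class indices for one PASCAL-5^i fold (illustration only):
# 0 is background, 1-15 are base classes, 16-20 are novel classes.
BG = [0]
BASE = list(range(1, 16))    # 15 base classes
NOVEL = list(range(16, 21))  # 5 novel classes

# Generalized setting (used by DIaM): a single prediction over all
# background, base, and novel classes at once.
gfss_label_space = BG + BASE + NOVEL
print(len(gfss_label_space))  # -> 21

# BAM-style setting: one episode per novel class; each episode's label
# space contains background, the base classes, and that single novel class.
bam_label_spaces = [BG + BASE + [n] for n in NOVEL]
print([len(s) for s in bam_label_spaces])
```

The key point is that the generalized model must discriminate among all novel classes simultaneously, whereas the episodic setting never forces the model to tell one novel class from another.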

alpoler commented 1 year ago

Thank you for the clarification; that makes sense. I also consider the generalized setting more practical. Still, I wonder how your method compares with BAM in the standard few-shot segmentation setting. Would you mind sharing results for standard FSS if you have them? I could not find any standard FSS results in the paper.

sinahmr commented 1 year ago

We didn't tackle the classic FSS problem. Our method is specifically designed for GFSS, and I don't think it is well-suited for classic FSS. All of our experiments are in the GFSS setting.