Closed Dwrety closed 2 years ago
Hi~
Interesting. Thanks for your answer. I reimplemented your OneNet in the MMDetection framework and got a better result, 34+ mask mAP. After bringing the mask cost into the matching cost, I was able to reach around 35.6 mask mAP. I think the mask branch learns well enough; the cap on performance is mostly due to classification error, and this could be underfitting caused by having only 1 positive anchor per GT.
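For reference, here is a minimal pure-Python sketch of what I mean by folding a mask cost into the min-cost assignment. The function names (`dice_cost`, `assign_min_cost`) and the weights are illustrative assumptions, not code from either repo:

```python
def dice_cost(pred_mask, gt_mask, eps=1e-6):
    """Dice-based mask cost in [0, 1]; inputs are flat lists of floats in [0, 1]."""
    inter = sum(p * g for p, g in zip(pred_mask, gt_mask))
    total = sum(pred_mask) + sum(gt_mask)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def assign_min_cost(cls_costs, loc_costs, mask_costs,
                    w_cls=2.0, w_loc=1.0, w_mask=1.0):
    """OneNet-style assignment: exactly one positive anchor per GT,
    the one with the lowest weighted combined cost.
    Each *_costs list holds per-anchor costs for a single GT instance."""
    combined = [w_cls * c + w_loc * l + w_mask * m
                for c, l, m in zip(cls_costs, loc_costs, mask_costs)]
    return min(range(len(combined)), key=combined.__getitem__)
```

With 3 candidate anchors, `assign_min_cost([0.9, 0.2, 0.5], [0.3, 0.4, 0.1], [0.5, 0.1, 0.9])` picks anchor 1, since its combined cost (0.9) is the lowest.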
Very interesting work! I have read your implementation of CondInst with OneNet matching. I've noticed a significant drop in mask AP compared to the original CondInst. What could be the causes?
Is it because a single positive sample per instance is not enough to train the mask branch? (I see you have doubled the mask loss weight.) Or is it because of the AdamW optimizer? Have you dug any further into this issue?
I have a second question. The paper describes that all anchor boxes/points are used in the cost calculation. Have you tried first applying a hand-crafted assignment (e.g. FCOS) and then matching? In other words, do the assignment results from matching still satisfy those hand-crafted rules? Would love to hear from you. Thank you.
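To make the second question concrete, here is a small sketch of the check I have in mind: after min-cost matching picks a point, test whether that point would also have been a positive under an FCOS-style rule (point inside the GT box, and the max regression distance within the level's range). All names and thresholds here are my own illustrative assumptions:

```python
def inside_box(point, box):
    """point = (x, y); box = (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def satisfies_fcos_rule(point, box, regress_range):
    """FCOS-style positivity: the point lies inside the GT box and
    max(l, t, r, b) falls within this FPN level's regression range."""
    if not inside_box(point, box):
        return False
    x, y = point
    x1, y1, x2, y2 = box
    max_dist = max(x - x1, y - y1, x2 - x, y2 - y)
    lo, hi = regress_range
    return lo <= max_dist <= hi
```

For a 100x100 GT box and a level with range (0, 64), the box center passes the rule (all four distances are 50), while a point near a corner fails because its farthest side is 90 pixels away. Counting how often the matched point fails this check would answer whether matching still agrees with the hand-crafted assignment.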