ucas-vg / P2BNet

ECCV2022, Point-to-Box Network for Accurate Object Detection via Single Point Supervision
MIT License

Performance based only on ResNet-50 #8

Closed: liuyang-ict closed this issue 1 year ago

liuyang-ict commented 1 year ago

Dear authors:

Thanks for your great work! What are the mean-IoU results of P2BNet when only ResNet-50 is used, without the help of FPN?

By the way, would it be helpful to share the backbone between P2BNet and Faster R-CNN?

Darren-pfchen commented 1 year ago

For the first question, we didn't try it, but we did try using a single level of FPN, and it affects the performance little. You can try it yourself and we can discuss it. For the second question, sharing the backbone between P2BNet and Faster R-CNN can boost performance. We tried training the two networks together with a loss weight ratio of 1:4 between P2BNet and Faster R-CNN, and the AP increased by about 1 or 2 points.
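The joint-training recipe described above can be sketched as a simple weighted sum of the two detectors' losses. This is a hypothetical illustration, not the authors' code; the function and argument names are made up, and only the 1:4 weighting is taken from the comment.

```python
def combined_loss(p2b_loss, frcnn_loss, w_p2b=1.0, w_frcnn=4.0):
    """Combine P2BNet and Faster R-CNN losses with a 1:4 weight ratio.

    Both detectors share one backbone, so backpropagating this single
    scalar updates the shared features with both supervision signals.
    (Illustrative sketch; names and structure are assumptions.)
    """
    return w_p2b * p2b_loss + w_frcnn * frcnn_loss
```

In an mmdet-style trainer, the two loss dicts would each be reduced to a scalar first and then combined this way before the optimizer step.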

liuyang-ict commented 1 year ago

I found that the performance with a single level and without FPN was significantly reduced, dropping to ~34 mIoU for the PBR stage and ~36 mIoU for the CBP stage. Interestingly, the mIoU of the PBR stage is lower than that of the CBP stage in the single-level setting. For the cooperative training, would you mind sharing the source code for further exploration?

Darren-pfchen commented 1 year ago

The cooperative training will be part of our new work; please wait for ICCV 2023. For a single ResNet level, the drop may be because feature fusion is lacking. You can choose the level with stride-8 or stride-4 resolution. We did not study this, but you could make some visualizations.
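Picking the stride-8 or stride-4 level from a list of pyramid outputs can be sketched as below. This is a hedged illustration: it assumes the common mmdet-style convention where level `i` of the feature list has stride `4 * 2**i`, which may not match the actual config.

```python
import math

def select_level_by_stride(feats, stride, base_stride=4):
    """Return the feature map whose stride matches `stride`.

    Assumes feats[i] has stride base_stride * 2**i (e.g. strides
    4, 8, 16, 32 for a 4-level pyramid) -- an assumption, not a
    guarantee about P2BNet's configuration.
    """
    idx = int(math.log2(stride // base_stride))
    return feats[idx]
```

For example, with a 4-level pyramid, `stride=8` selects the second map, which keeps finer spatial detail than the deeper levels when no fusion is applied.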

liuyang-ict commented 1 year ago

Dear authors:

Thanks for your attention to this comment. There is another question about box coordinates for roi_extractor.

In your code for the CBP and PBR stages, the box coordinates are generated according to img_meta['img_shape'] rather than img_meta['pad_shape'], which is the real size of the input image after zero-padding. I wonder why these coordinates are used for the roi_extractor.
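The distinction being asked about is that img_shape is the resized image size before padding, while pad_shape is the padded tensor size; clipping boxes to img_shape keeps proposals inside the valid pixels and out of the zero-padded border. A minimal sketch of that clipping, in plain Python (the function name is made up for illustration):

```python
def clamp_boxes_to_shape(boxes, shape):
    """Clip (x1, y1, x2, y2) boxes to an (h, w, ...) image shape.

    Using img_shape here (not pad_shape) ensures no box extends into
    the zero-padded region of the batched input tensor.
    """
    h, w = shape[0], shape[1]
    return [
        (max(0, min(x1, w)), max(0, min(y1, h)),
         max(0, min(x2, w)), max(0, min(y2, h)))
        for x1, y1, x2, y2 in boxes
    ]
```

Note that the ROI extractor itself maps coordinates to feature space by stride, so either shape would index valid feature cells; clipping to img_shape simply avoids pooling over padded zeros.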

Look forward to your reply!

Thanks, Yang