A1. We used the pre-trained weights provided by CPS for a fair comparison, and we also use the same sample IDs and ResNet architecture as CPS.
A2. We did not compare this difference; the reason is the same as in A1.
A3. Same reason as in A1.
A4. Same reason as in A1.
Please feel free to run experiments based on the original ImageNet checkpoint and ResNet architecture, but we prefer to follow the released work. Note: all the results in the DeepLabv3+ experimental tables are borrowed from CPS, and they are based on the "deep stem" variant if you read the code here.
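For readers unfamiliar with the "deep stem" term, here is a minimal sketch of the difference, assuming the common deep-stem layout (three stacked 3x3 convolutions replacing the single 7x7 stem conv); channel widths are illustrative and may differ from the checkpoint's exact configuration:

```python
import torch
import torch.nn as nn

# Standard torchvision ResNet stem: one 7x7 convolution, then max-pooling.
standard_stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

# "Deep stem" variant (in the style of Hang Zhang's ResNet line of work):
# three stacked 3x3 convolutions replace the single 7x7 one.
deep_stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

x = torch.randn(1, 3, 224, 224)
print(standard_stem(x).shape)  # torch.Size([1, 64, 56, 56])
print(deep_stem(x).shape)      # torch.Size([1, 128, 56, 56])
```

Both stems reduce spatial resolution by 4x, but the parameter names and shapes differ, which is why checkpoints for the two variants are not interchangeable.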
By the way, the other peer works use more dilated convolutions in the last two layers of ResNet, which seems to further improve performance. Cheers
Thanks for your reply! I was just wondering why such works, including CPS, did not mention using a modified version of ResNet in their papers. Anyway, I got it. Thanks for sharing your work.
Hello, thanks for sharing your excellent work!
I have questions about ImageNet pre-trained weights and ResNet architecture.
As you have already mentioned on the Getting Started page, you utilized the same checkpoints as provided by CPS.
Q1. Why did you use the privately offered pre-trained weights (from the CPS authors) instead of the official PyTorch ImageNet ResNet pre-trained weights? Is there a reason for that?
Q2. Also, have you observed any final performance difference between the two versions of pre-trained weights (CPS's and PyTorch's)?
Q3. Why do you use a modified version of the ResNet backbone network? As far as I know, it differs from the original one (you use the deep-stem ResNet by Hang Zhang, not the original), and this should have been mentioned in the paper. Is it fair to compare your results with previous work?
Q4. Can you release the performance of your work with the original ResNet backbone (that is, the torchvision ResNet), using the PyTorch-provided ImageNet pre-trained weights?
Thanks.