Closed CenIII closed 4 years ago
Hi @CenIII, We used imagenet pretrained model for deeplabv2.
The ResNet backbone of DeepLabv2 seems to have a different configuration (e.g. different dilation rates or strides). How did you reload the weights of the pretrained model? (My understanding is that the difference in dilation rate won't affect the weight loading?)
Second question: did you freeze the first BN layer during training?
Thanks!
- Your understanding is right. We just followed the settings of the original DeepLab.
- Yes, the first conv and BN layers should be frozen when fine-tuning.
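To illustrate the two points above, here is a minimal PyTorch sketch (the original models were Caffe-based, so this is an assumption about the equivalent setup, not the authors' code; the `conv1`/`bn1` names follow torchvision's ResNet convention). Changing a conv layer's dilation does not change its weight tensor's shape, so ImageNet-pretrained weights load unchanged; freezing the first conv + BN means disabling their gradients and keeping the BN in eval mode so its running statistics stop updating.

```python
import torch.nn as nn

# A 3x3 conv with dilation=1 vs. dilation=2 (padding adjusted to keep the
# spatial size): the weight tensors have identical shapes, so a state_dict
# trained without dilation loads directly into the dilated version.
conv_plain = nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1)
conv_dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)
assert conv_plain.weight.shape == conv_dilated.weight.shape
conv_dilated.load_state_dict(conv_plain.state_dict())  # no shape mismatch

# Freezing the first conv + BN when fine-tuning (module names are
# illustrative, matching torchvision's ResNet stem):
backbone = nn.Sequential()
backbone.add_module("conv1", nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False))
backbone.add_module("bn1", nn.BatchNorm2d(64))
for p in backbone.conv1.parameters():
    p.requires_grad = False  # no weight updates for the first conv
for p in backbone.bn1.parameters():
    p.requires_grad = False  # no updates to BN affine params
backbone.bn1.eval()  # also freeze the BN running mean/var
```

Note that `bn1.eval()` must be re-applied after any `model.train()` call in the training loop, otherwise the running statistics will start updating again.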
Thank you for replying!
Would you be willing to share the code you used to train DeepLabv2? Thank you!
Same question as @CenIII. Thank you! @jiwoon-ahn
@CenIII, We made our own model from the bottom. I highly recommend using the code of the author of DeepLab (https://bitbucket.org/aquariusjay/deeplab-public-ver2/src/master/)
@jiwoon-ahn Thanks! What do you mean by "made from the bottom"? How do your model and training process differ from the original?
I believe there is little or no gap between our model and the original, as we did our best to minimize the differences.
Got it. Thank you for your help!
When you were training DeepLabv2 on the pseudo-labels produced by your method, had DeepLabv2 been pretrained beforehand, or was it trained from scratch? Thanks.