Open quanlzheng opened 5 years ago
Thank you for your question! The code will be released in a few days. For ResNet, we use all the 3x3 conv layers, excluding the 1x1 conv layers. We also change the stride of the "conv1" layer from 2 to 1. More details will be in the code, and I will let you know here when I have time to release it.
@yun-liu
> Thank you for your question! The code will be released in a few days. For ResNet, we use all the 3x3 conv layers, excluding the 1x1 conv layers. We also change the stride of the "conv1" layer from 2 to 1. More details will be in the code, and I will let you know here when I have time to release it.
Is the reason for using all the 3x3 conv layers and excluding the 1x1 conv layers in RCF that ResNet is deep enough, so there is not much need for the higher abstraction of the 1x1 kernels in later layers?
And are the main implementation challenges how to efficiently utilize the residual blocks during deconvolution (transposed convolution) and up-sampling?
Thank you again! The reason we only use 3x3 conv layers is that 1x1 conv layers cannot enlarge the receptive field, while the richer conv features we refer to are conv features at different scales, i.e. with different receptive fields. We also change the stride of the "conv1" layer from 2 to 1. Besides, the implementation is similar to the VGG version. I will clean up my code and release it after the CVPR deadline.
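Since the code is not released yet, here is a minimal PyTorch sketch of what the comment above describes: side outputs are attached only to the 3x3 conv layers (the 1x1 convs are skipped because they do not enlarge the receptive field), and "conv1" uses stride 1 instead of 2. The class names, channel widths, and fusion-by-averaging here are my own illustrative assumptions, not the authors' actual RCF-ResNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """A toy residual block that also returns its 3x3 conv features."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        # Collect the output of every 3x3 conv as a "richer" feature;
        # 1x1 projection/bottleneck convs would NOT be collected.
        f1 = F.relu(self.conv1(x))
        f2 = self.conv2(f1)
        out = F.relu(x + f2)  # residual connection
        return out, [f1, f2]

class TinyRCFResNet(nn.Module):
    """Illustrative sketch, not the authors' released architecture."""
    def __init__(self, ch=8):
        super().__init__()
        # As described in the thread: conv1 stride changed from 2 to 1.
        self.conv1 = nn.Conv2d(3, ch, 7, stride=1, padding=3)
        self.block = BasicBlock(ch)
        # One 1x1 side conv per collected 3x3 feature map.
        self.side = nn.ModuleList([nn.Conv2d(ch, 1, 1) for _ in range(2)])

    def forward(self, x):
        h = F.relu(self.conv1(x))
        h, feats = self.block(h)
        sides = [s(f) for s, f in zip(self.side, feats)]
        # Upsample each side output to the input resolution and fuse
        # (averaging here; RCF actually learns a fusion layer).
        sides = [F.interpolate(s, size=x.shape[2:], mode="bilinear",
                               align_corners=False) for s in sides]
        return torch.sigmoid(sum(sides) / len(sides))

net = TinyRCFResNet()
edge = net(torch.randn(1, 3, 32, 32))
print(edge.shape)  # torch.Size([1, 1, 32, 32])
```

Because the stride of conv1 is 1, the fused edge map keeps the input's spatial resolution without any extra deconvolution at that stage.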
You're welcome. OK, so you are referring to the conv layers in between.
Have you released the code of RCF with ResNet-50 and ResNet-101 (the prototxt files)?
Thank you so much.
Can you give an example showing how to apply it to ResNet (e.g., ResNet-101)?
Do you use all the layers?